Friday, 30 May 2014

Virtual CPU - Signed Addition Clean Up & ROM Discussed

In today's CPU post I just want to clean up the signed addition example. We've covered the electronics, but I've had a couple of messages asking how I might integrate the switching into the CPU.

Well, for our Virtual CPU I'm simply going to select signed addition based on the signed mode flag... We already had this signed mode flag in the CPU, and we default it to false, or "off".

So that is a simple "if" statement within the "Add" function.

To integrate the switching we invent two new OP Codes, one to switch into Signed processing and one to switch to Unsigned processing.
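To sketch what I mean (and this is only a sketch - the op code numbers, the member names like m_SignedMode and m_Register0, and the exact AddTwoBytes parameters are my guesses at the shape of our CPU class, with AddSignedTwoBytes coming from the signed addition post further down the page):

// Sketch only - op code values and member names are assumptions
enum OpCodes
{
    // ... our existing op codes 0 to 25 ...
    SetSignedMode   = 26,   // switch into Signed processing
    SetUnsignedMode = 27    // switch back to Unsigned processing
};

// Inside the op code dispatch:
//   case SetSignedMode:   m_SignedMode = true;  break;
//   case SetUnsignedMode: m_SignedMode = false; break;

// And the "Add" function just picks the adder based on the flag
void CPU::Add()
{
    bool l_Overflow = false;

    if ( m_SignedMode )
    {
        byte l_Result = 0;
        Electronics::AddSignedTwoBytes(m_Register0, m_Register1, l_Result, l_Overflow, false);
        m_Register0 = l_Result;
    }
    else
    {
        // The original unsigned adder from the earlier posts
        Electronics::AddTwoBytes(m_Register0, m_Register1, l_Overflow, false);
    }

    m_Overflow = l_Overflow;
}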


I'll leave you guys to think about how the programmer has to remember which mode they were in, and hence what the bit patterns they hold actually represent.

I'm also going to leave multiplication whilst in Signed Mode as an exercise for you to tackle yourselves; if you want to mail me your solutions, I'll happily take a look (if I find a minute).

Our op codes now run from zero to twenty-seven.  So with 28 operations, what could a machine do?

You might think not very much, but the real 4004 (though not using the same instruction codes as our virtual CPU) operated with just 46 instructions in total - more than our code has at present, but still not a lot.  Intel have kindly published scanned copies of their original datasheets, so we can peek into the depths of their instruction set here:


Numerically we can see almost immediately that their instructions 2 and 3 are about Fetching... Fetching Immediate and Fetching Indirect (from ROM)... What is Fetching?  Well, in other machines, in many assemblers, and in our Virtual CPU the concept of Fetching is called "Loading", and we have Load0 and Load1.  Both those instructions load from memory into the CPU; this is "Immediate" - it immediately moves a value from volatile storage into the processor.

Indirect for our CPU would actually be the main program loading a program from a file; the file is our ROM, or non-volatile storage, and we load it into RAM to use it.  However, we don't fetch from ROM.

I had been asked to add a ROM to the Virtual CPU; however, all a ROM really is is addressable memory which can't be changed.  So if you want to create a ROM yourself you can: create a byte array in your program and either load it from a disk file or just insert the data into the array upon construction.

Then add two new Op Codes to fetch from the ROM.  You can then add the ROM to your CPU as a reference...
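As a rough sketch of the sort of thing I mean (the class shape, the Read function and the FetchROM op code names here are just my suggestions, not part of the series code):

#include <cstddef>
#include <vector>

typedef unsigned char byte;

// A ROM is just addressable memory we can read but never write
class ROM
{
public:
    // Insert the data upon construction - it could equally be
    // filled from a disk file before being handed to the CPU
    ROM ( const std::vector<byte>& p_Data )
        : m_Data ( p_Data )
    {
    }

    // Read only access - there is deliberately no Write function
    byte Read ( const std::size_t p_Address ) const
    {
        return m_Data.at(p_Address);
    }

private:
    std::vector<byte> m_Data;
};

// The CPU then holds the ROM as a reference, and two new op codes
// (FetchROM0 and FetchROM1, say) copy a ROM byte into a register:
//
//   case FetchROM0: m_Register0 = m_ROM.Read(l_Address); break;
//   case FetchROM1: m_Register1 = m_ROM.Read(l_Address); break;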

I hope this gives you some ideas, and you go ahead and try to write a small ROM.

Next on our agenda will be "Interrupts"... Stay Tuned.

WarThunder - OpenGL Performance Discussed

In the final post of my series regarding WarThunder this week, I've taken a look at the performance of the OpenGL rendering option.

I've tried this on two machines, with different graphics cards but otherwise very similar specifications; the two machines are:

Core i7 950 - 3.07GHz
16GB 1600MHz DDR3 RAM
Nvidia GeForce GTX 260 896MB DDR3

Core i7 3770 - 3.01GHz
8GB 1600MHz DDR3 RAM
Nvidia GeForce 540 2GB DDR5

Clearly one machine has the superior graphics performance in DirectX mode - yes, the 540 outdoes the 260, just... Though my 260 is a really good superclocked edition, so much so that it's proving hard to find an economical replacement (pushing me to have to buy a 770 GTX just to see a significant improvement).

Almost all the screenshots you see of WarThunder on this blog come from the machine with the 260 in it as well.

But so far all the screen shots have been from the Direct3D renderer.

Now, however, I'm testing the OpenGL renderer - the option is listed as Test - and I've also tried to corroborate my observations with other players... My overall impression is:

Half the FPS: if I got 60fps in a situation in a plane in Direct3D, I'd get 30 in the near-same situation in OpenGL.  This is a rough and ready measurement, taken by eye from the FPS reading given on screen in the client.

I've purposefully NOT recorded my game play, I've not run other programs, nor over-reached things - that is, I've not thrown clouds of AI aircraft into formation within a custom battle and flown through them.  I've stuck to playing the regular game in the regular way.

Let's take a look at some screen shots:




It looks the same; I've not noticed any surface-level differences, and the game looks just as good.

Only when you scrutinize things do you find the differences.  If we jump into a cockpit and look around, we can see that where in Direct3D we'd get smooth textures, things can look quite odd:


This is the seat over my pilot's left shoulder in the Fw190 A1.  As you can see there's this square stippling going on; also along the window rail you can see a darker patch, which is actually a shadow from the windshield surround.

This isn't the only odd thing I noted.  On the 540 card, but not on the 260, I took the Fw190 into a twilight flight.  You can change this yourself in your settings: under the Graphics options you need to select the Texture type of "Night as Day" (or something like that)...

When I flew the plane out on the 540 GTX, I saw this strange halo effect around the periphery:


The effect persisted against the clouds, from any angle, or against the ground.  But it did not show against clear sky.


Switching back to Direct3D, no such halo was present on either card's output.

Performance wise, I've already mentioned the lower frame rate; however, at one point during a pursuit in a Historical battle in the Me410 I took out an enemy aircraft, and with the explosion close by I suddenly had a frame rate reduction down to 2FPS.  Literally slide show mode...

I used a little back stick to bring my nose up and came through it, but I'd moved around 1km before the frame rate returned to above 20 and allowed me to have some semblance of control.

I checked my logging: there was no hard drive activity, no appreciable network lag, no dropped packets.  I could type into the client and ask "Anyone else getting lag?", which got a reply of "nope" from a fair few players.  Yet I was at 2FPS, 50 meters off of the hard deck, struggling.

The optimisation of the Direct3D code being used by Gaijin is really good; they get performance out of my rig that other, less intensive, less pretty games struggle to match... So perhaps we can chalk this massive red blot on the performance of the OpenGL implementation up to a lack of development/optimisation, and of course to the code being "Test".

But one would assume the Mac client is already using OpenGL - are any Mac players out there able to share their thoughts?

Either way it looks promising that OpenGL does work, and this of course leaves us thinking about Linux/SteamOS.

Thursday, 29 May 2014

Is WarThunder coming to Linux?

I've been musing about the future of many of the titles on Steam, and their progress in converting them to Linux, or more specifically to SteamOS.  Valve (the publishers behind Steam) are big on providing technical talks and information to developers to help assist them in converting titles to OpenGL, so I've taken some time to look at the games I have on Steam and which ones I'd like to see on Linux.

The first, and most recently played was "WarThunder", and after a little reading and a little looking, I spotted in the launcher the option to use "OpenGL"...


It looks promising therefore that Gaijin are porting the rendering engine to OpenGL.  I even went as far as switching to OpenGL and firing up the game; my immediate feedback would be that where I was getting 60FPS+ and even 90FPS in places, I was suddenly getting an average of 43FPS.  However, the OpenGL rendering is listed as "test" and one would assume it's not optimised at all yet.

Performance changes aside, it does look like Gaijin could have a Linux operable version soon... I just hope they fix the annoying team text chat bug soon... What bug is this?  Well, when you crash, or die, you are presented with the team chat box, and I often start to type information into it to pass on to my team where I went down, especially if I went down due to enemy fire... You can be midway through typing a message when the screen times out and changes from the view of the opponent who put you down to the "Select a plane" or "Observer" screen... This then re-presents the same (or an identical looking) team chat box, but it has wiped out the input you had just seconds before... This is a bug which has been in the game since I started playing, so it is a very annoying problem and one which either isn't getting reported, or isn't getting noticed and fixed.

Back to Gaijin's development model however: if we search for news or even output regarding their porting to Linux there's initially little to go on; in fact on an unbiased Google search this was the first result.

http://forum.gaijinent.com/index.php?/topic/9346-linux-port/

"At the moment only for Windows. About Mac and Linux we will see in the future."  A comment by a Gaijin Forum Administrator on the 28th July 2012.  So have things changed since then?  The launcher says they have... Pawing through the archives it also appears that OpenGL has been quietly present in the builds since early November 2013.

My question to Gaijin therefore would be: have they been forcing any test clients into OpenGL mode, have they been gathering information, or switching us players to OpenGL to gather rendering information?  Because it is a setting in a quiet corner of the launcher, but so very important to us Linux-philes; for if they release this game for Linux then I'm down to only three titles needing Windows specifically, and I could finally look at just playing their game on a native Linux platform...

Linux is not the first new platform they've targeted with the engine: you can now get the option from the website to download the Mac version of the game client.  Clearly the Mac version is going to leverage OpenGL, and indeed any Mac based players will be testing the formal OpenGL calls.  Unfortunately that doesn't test for quirks in Linux or Windows with their independent implementations of OpenGL.

SteamOS is also a very new kid on the distro block for Linux, so if they're going to target it, surely they're also going to be able to give us tweaks to run it on standard Debian or Ubuntu distros too.

Excitingly, I've spoken to Gaijin and had a reply from one Alexander Trifonov; his message is short and sweet:


"Linux/SteamOS, please wait a little more and there will be some news"


Thank you for your reply Alexander, we look forward to more news from you soon...


Wednesday, 28 May 2014

The Brave 600

Just a little note to the world: this is my six hundredth post.  Who'd have thought I would waffle on so long...

Tuesday, 27 May 2014

WarThunder - P400 - More Flyouts

I did get to fly out a few more times over Monday; it being a Bank Holiday I figured what the hell... I have, after all, decorated two rooms and found a secret hidden power socket - shhh, it's a secret!

And despite my GPU woes, you have to admit WarThunder is a spectacularly pretty game...

I even spent some time, well a couple of minutes, customizing my P400.





Yes, I stuck a Union Flag on her - this is not a Union Jack, you only put Jacks on boats or ships, so this is a Flag... It seemed to bring me luck too, as I immediately flew out and downed two German Bombers, an He111 and a Ju88, as well as shooting the heck out of another 88...

This was of course not before I met an Allied ass hat in an F4F who decided the best way to shoot a Pe3 down was to shoot through me, destroying my tail and causing me to finally crash upon returning to base.  But even for that game I received Battle Trophies, so all's well that ends well.

These are my first few fly outs in WarThunder in a long while.  The reason for my absence was the spate of "Pilot Knocked Out" results I received a few months back - every time I went anywhere near an enemy fighter, boom, unconscious.  That however does not seem to be happening any longer... Ground Forces also adds a nice touch to the game, but I'm having sound issues with that part of the game... which I've not fully investigated my end before I start moaning at Gaijin.

Monday, 26 May 2014

WarThunder - Flyout

Despite being very busy, and working on many projects, I have tonight gotten myself some time in the pilot seat and flown out in my German planes in WarThunder, flying arcade due to the dearth of Historical battles (which I prefer).

I then switched to my American aircraft, a much unloved line in my tech tree, where I've only just reached into some of the Tier II aircraft.  I think my problem with the American aircraft is that, compared to either the British, Japanese or German lines (which have had my focus), the Americans seem very stiff.

The last plane I unlocked however was the P400... 


And just now I had a hell of a flight in a Historical battle in it; however, the game froze up... Sounds carried on being played, I alt-tabbed out to check my network, which was fine, then tabbed back and it just kept refreshing the screen as black.

I relogged into the server and it told me the battle was over, but it was one of my best Historical fly-outs as the Americans ever: I took out two player planes and one AI plane, and strafed seven ground units, getting a 6x streak on the ground units, and was on the eighth just as the game froze...

I'd have RTB'd and come back for more too; I was really enjoying the flyout.


One disastrous thing for me at the moment however is the power of my graphics card; it's getting very long in the tooth, and it's the oldest piece of kit in my machine.  The last machine rebuild was everything except the graphics card, so I have a Core i7 and 16GB of 1600MHz DDR3 RAM, but this hideously underpowered GTX 260 graphics card.

The problem with changing the card however is that the 260 does a pretty decent job - as you can see above, it does not look terrible - and the performance increase from the budget market cards doesn't really impress me.

So my only upgrade path would be to leap up to a GTX 770, or GTX 780...  I have eyed up the EVGA GTX 770 2GB Superclocked card, but with its wallet smacking £260 price tag it's nothing more than a pipe dream... I have bills to pay, a house to sort and a wife to support... I dream however, I dream.

Wednesday, 21 May 2014

Virtual CPU - Signed Addition & Endianness

From yesterday's post, then, we should have learned something and perhaps even gone to look for a solution; you may have even coded a solution into the CPU code we're working on...

The solution I'm going with however is a total cheat: I'm going to add the Cout of the last adder back onto the result with a single full-adder...


Essentially we add the carry out into the first bit again.  But we only want to do this when using a Signed value... so we'll cheat and use our "Signed" flag to select either this new function or the original function... Let's get on with creating "AddSignedTwoBytes":

void Electronics::AddSignedTwoBytes (
byte& p_Register0,
byte& p_Register1,
byte& p_Result,
bool& p_Overflow,
const bool& p_Debug)
{
bool l_CarryIn = false;
bool l_CarryOut = false;
bool l_Sum = false;

// For each bit we shift the register
// right by the loop count, so the bit
// we're interested in sits in the
// lowest position, then we mask it
// out with 00000001.


// Our mask never changes
byte l_mask = 0x01;

// For each bit we run the masking 
// then adder and handle switching
// the result into the register.
// You can find more efficient ways!
for (int i = 0; i < 8; ++i) // 8 bits in a byte
{
if ( p_Debug )
{
std::cout << "Cycle: " << i << std::endl;
std::bitset<8> msk { l_mask };
std::cout << "Mask: " << msk << std::endl;
std::bitset<8> reg0 { p_Register0 };
std::bitset<8> reg1 { p_Register1 };
std::cout << "Register 0 [" << reg0 << "]" << std::endl;
std::cout << "Register 1 [" << reg1 << "]" << std::endl;
}

// Get the A & B bits by shift & masking
// the register
bool A = ( ( ( p_Register0 >> i ) & l_mask) == 1);
bool B = ( ( ( p_Register1 >> i ) & l_mask) == 1);

// We have the carry in and the A & B now, so
// we can call the adder
// Because the Carry out, and the Sum, are separate
// in our code here, we don't need to alter "reg0" or
// "reg1", we can just logically add the bits set
// into the p_Result below!
Adder(A, B, l_CarryIn, l_CarryOut, l_Sum, p_Debug);

if ( p_Debug )
{
// This should be a value from our Adder trace table!
std::cout << "Adding: " << A << " " << B << " " << l_CarryIn << " | " << l_CarryOut << " " << l_Sum << std::endl;
}

// The carry out simply becomes the carry in
// I'm sure you can see one way to optimise this already!
l_CarryIn = l_CarryOut;

// Now the register change based on sum, but
// we also output the binary
if ( p_Debug )
{
std::bitset<8> resultBefore { p_Result };
std::cout << "Result Change: " << resultBefore << " -> ";
}

// Now the logic
// Now instead of pushing the logical
// summing into "Register0" parameter,
// we push it into the p_Result parameter!
if ( l_Sum )
{
// Mask is shifted, and always 1 in the i position
// so we always add a 1 back into the target
// register in the right location
p_Result = p_Result | ( l_mask << i);
}
else
{
// We know the mask is ON, so inversing it and moving it
// will give us an always off...
p_Result = p_Result & ~(l_mask << i);
}

// The register changed, so finish the debug statements
if ( p_Debug )
{
std::bitset<8> resultAfter { p_Result };
std::cout << resultAfter << std::endl;
}
}

//======================================
// Add the carry out back onto the first bit again
// Take the first bit of the result
bool A = ( ( p_Result & 0x01) == 1);
// Add the final carry out to it, with no carry in (false)
Adder(A, l_CarryOut, false, l_CarryOut, l_Sum, p_Debug);
// Now push the summed bit back into the
// first bit of p_Result
if ( l_Sum )
{
// Set the first bit on
p_Result = p_Result | 0x01;
}
else
{
// Clear the first bit
p_Result = p_Result & ~0x01;
}
//======================================

// The final carry out becomes our
// over flow
p_Overflow = l_CarryOut;
}

So with this code we need to test the function; let's add a new test function:

void Electronics::TestSignedAdd()
{
byte l_A = -127;
byte l_B = 7;
byte l_result = 0;
bool l_Overflow = false;

AddSignedTwoBytes(l_A, l_B, l_result, l_Overflow, false); // debug output off

std::cout << "Testing signed add:" << std::endl;

std::cout << "(" << (int)l_A << " + " << (int)l_B << ") = " << (int)l_result << std::endl;
}

Now, before we run the code, what do we expect to see?.. Well, we expect the value of -120, which has the binary pattern 10001000.

Let's run the code and see...


What the hell just happened?... 129 + 7... That's not what our code says... and the answer is 136... What is going on!??!?!

Calm down, calm down, everything is fine... The binary pattern of the result is correct... see...


So what is going on with the values we see on the screen, if our register holds the binary pattern for -120 as our result...!?

Well, the signed binary for -120 is the same pattern as the unsigned value 136!  It's as simple as that; our CPU is working, it's our C++ which has thrown us a curve ball.

The cout stream took the byte we sent and converted it for display, but the byte itself has no knowledge of signing - it is in fact an unsigned char, as we defined the type.  So the binary might be perfectly fine, but the interpretation of that binary is wrong.

This is a case of being careful with how you test your code, and is an example where at least two tests are needed to confirm a result; never take the result of just one test as canonical.  Always try to find some other way to test a value you calculate, or validate your input, or put bounds around your system.  Because when something goes wrong you always need to second-check yourself before complaining to others.

In the case of this code cout is showing the unsigned values, and that's fine, we can ignore it because to get the true binary we can just use bitset...

#include <bitset>
std::bitset<8> l_binary (p_Result);
std::cout << l_binary << std::endl;
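And, remembering the point about two tests, we could cross-check the value itself by asking cout for the signed interpretation of the very same byte (a quick sketch, where p_Result is the byte we just printed):

// The same byte, shown both ways
std::cout << "Unsigned view: " << (int)p_Result << std::endl;               // 136
std::cout << "Signed view:   " << (int)(signed char)p_Result << std::endl;  // -120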

This is a lesson to learn in itself: always check & recheck your code.

But now we have to think about the repercussions of this for our CPU.  Even if we've set the signed flag, the data are being stored unsigned; the memory is storing the values just as patterns of binary... And this is an important thing to keep in mind when you're programming, when you are working with a CPU: how the value is expressed is more important than how it is stored, because (hopefully) the binary representation is going to be the same all the time...

OR IS IT?

Unfortunately not.  The binary we've dealt with so far is what we could call "Little-Endian", that is, the lowest value assigned to a bit in the byte starts on the right, and we read the byte from right to left.  Essentially the opposite way we would read this very English text...


If we read the byte the opposite way around then the values would be reversed:


This is called Big-Endian.

Intel processors have pretty much always been little-endian, whilst other firms have used big-endian; notable platforms using big-endian processors are the Motorola 680x0 family of CPUs.  Yes, those of the Atari ST, the Amiga, the original Mac... They all had big-endian CPUs.

Some said this set a gulf between the two, and emulating between the two systems is very time consuming, because to emulate a big-endian processor on a little-endian machine used to mean a lot of overhead in converting between the binary representations.

Our CPU is going to suffer from this problem, because we've built it, and its adder, to use little-endian principles; e.g. we start the adder loop from 0 and go up to n-1, whereas for a big-endian machine we'd want to start the adder loop at n-1 and go down to 0 to complete an addition.

A challenge would be to go back over our whole CPU and convert it for Endianness, making it a generic, configurable hardware implementation of a generic 8-bit logical operating unit... I'm not going to do it, I'm just here to guide your experience.
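If you do fancy having a go, here is a rough sketch of the sort of shape it might take - the BitOrder enum and the function are entirely my own invention; it simply re-uses our existing "Adder" and the "byte" typedef, and walks the bits in whichever direction the caller asks for:

enum BitOrder { LittleEndianBits, BigEndianBits };

void AddTwoBytesConfigurable (
    const byte p_A,
    const byte p_B,
    byte& p_Result,
    bool& p_Overflow,
    const BitOrder p_Order)
{
    bool l_CarryIn = false;
    bool l_CarryOut = false;
    bool l_Sum = false;

    for (int step = 0; step < 8; ++step)
    {
        // Little-endian bits: the lowest value bit is position 0, so walk 0 -> 7
        // Big-endian bits: the lowest value bit is position 7, so walk 7 -> 0
        int i = ( p_Order == LittleEndianBits ) ? step : ( 7 - step );

        bool A = ( ( ( p_A >> i ) & 0x01 ) == 1 );
        bool B = ( ( ( p_B >> i ) & 0x01 ) == 1 );

        Adder(A, B, l_CarryIn, l_CarryOut, l_Sum, false);
        l_CarryIn = l_CarryOut;

        if ( l_Sum )
        {
            p_Result = p_Result | ( 0x01 << i );
        }
        else
        {
            p_Result = p_Result & ~( 0x01 << i );
        }
    }

    // The last carry out is the overflow, whichever direction we walked
    p_Overflow = l_CarryOut;
}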

Tuesday, 20 May 2014

Virtual CPU - Adders & Signing Discussed

Let us review the physical electronics our code emulated for addition.  We created the function "AddTwoBytes", which in turn used "Adder", the adder being the code:

Sum = Cin ^ (A ^ B);
Cout = (A & B) | (Cin & (A ^ B));

This is of course the code representation of the electronic logic gates "AND", "OR" and "XOR".  We could have gone further and, rather than use "XOR" as a single operation, broken it down into separate "AND", "OR" and "NOT" gates.  This is how the first computers worked after all.  A good electronics primer is a good place to start looking at logic gates in more detail.
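Just to illustrate that decomposition (a little sketch, not part of our CPU code):

// XOR built only from AND, OR and NOT:
//   A XOR B  =  (A OR B) AND NOT (A AND B)
bool Xor (const bool A, const bool B)
{
    bool l_Either = A | B;        // the OR gate
    bool l_Both   = A & B;        // the AND gate
    return l_Either & !l_Both;    // AND the OR output with the inverted AND output
}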

But the use of "XOR" as a single unit, rather than as a more complex set of other logic gates, is what we programmers would call encapsulation.

We encapsulated this whole logic into a function called "Adder", and we encapsulated its use - to add each bit of two bytes - into yet another function.

Luckily electronics engineers have been at this encapsulation lark as long as us programmers, and so instead of representing the adder logic like this:


They've gone ahead and made the Adder look like this:

And then if we think about wiring each bit of two bytes through adders, each adder passing its carry out to the carry in of the next we get this wonderfully complex diagram:


Do not be scared by this; yes, I drew it by hand so there may be bugs, but all I want you to glean from this is how complex the electronics are.  Because remember, this mass of wires and adders, and inside each adder the logic gates, equates to the simple loop in the "AddTwoBytes" code!

This is one of the reasons many people emulate electronics, or CPUs, or just this kind of gated logic, in software: creating the hardware can be so much more complex, costly and hard to get right first time, but code we can change with a wave of the hand.

This representation, wiring each bit from the registers, is purposefully complex however, and there are other adders you can look at.

So how does all this get us signed numbers in our CPU?

When we enter signed mode in the CPU we want to consider the top most bit of our bytes as a sign: when this bit is off (0) the number is positive, and when the bit is on (1) the number is negative.

The flag doesn't really mean "and now this is negative"; it is actually that we change the meaning of the value, so the top most bit changes from having a value of +128 to having a value of -128.

Hence the binary 00000111 is still 7... but 10000111 is now ( -128 + 4 + 2 + 1 ), holding the value -121.
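We can check that re-weighting with a tiny helper - a sketch only, the function name is just for illustration, and "byte" is the unsigned char typedef we've been using:

// Work out the value of a byte when the top bit is worth -128
int SignedValueOf (const byte p_Value)
{
    int l_Value = 0;

    // Bits 0 to 6 keep their normal positive weights (1, 2, 4 ... 64)
    for (int i = 0; i < 7; ++i)
    {
        if ( ( p_Value >> i ) & 0x01 )
        {
            l_Value += ( 1 << i );
        }
    }

    // The top bit is worth -128 rather than +128
    if ( ( p_Value >> 7 ) & 0x01 )
    {
        l_Value -= 128;
    }

    return l_Value;
}

// SignedValueOf(0x07) == 7      (00000111)
// SignedValueOf(0x87) == -121   (10000111 = -128 + 4 + 2 + 1)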

Back to our electronics then: if you have the top most bit set and you overflow, do we want to just throw it away as an error?... Well, no, because it may be that the number has just gone from being a negative to a positive, or it's gone from a positive to a negative... So we want to carry the overflow back to the first adder...


But, of course this won't work, because you can't have an input to a process which is that very same process's output...

So what do those sneaky electronics chaps do?

Well... I'm not going to tell you... Go find out...

Friday, 16 May 2014

Hogwash - Coal & Oil to "Run Out"

No beating around the bush: what utter rubbish.  I just read this from the BBC:


That "UK's oil, coal and gas' to be 'gone in five years', what utter rubbish, perhaps the current pits left open will run out of coal, but the myriad of mines closed in the 80's and 90's lots still had coal to extract, the colliery my father worked at, Gedling, in Nottinghamshire.

It closed, and I remember at the time it was a stated fact that "There are 100 or more years of coal down there".  I don't doubt the UK would chew through 100 years of coal from that one pit in 5 years, but add all the closed mines together and there's decades, if not millennia, of coal down there.

And here in Britain our mines never went to extreme depth; they get down to the hundreds of meters, sure, but they don't go down to 1km+ very often.  And we have to look to the history here: coal is just fallen trees and plant matter, yes, compressed over the years with the minerals removed and the carbon remaining.  But coal comes from a specific period in history when there were no evolved bacteria which could break down wood.

So as trees fell they lay layer upon layer and built up.

In the intervening millions of years, bacteria have evolved which do break down wood and fibrous plant material; hence why coal is deep, because the conditions to form it stopped occurring.  So higher in the strata you get gas and oil from old sea beds, but no coal from old forests.

Now, if we go deeper, if we open up the mines, Britain is sitting pretty.  We could also look at extreme deep mineral recovery off of our seabed and the continental shelf out into the Atlantic; France, Belgium & Holland may even be amenable to collaborating in better exploring the North Sea.

The challenge is to open up those mines again though, and retrieve those minerals, and crucially find a way to use them cleanly, such as sulphur & carbon scrubbers on the outlets of power stations (a technology I still hear described as "new", yet I was taught all about it in GCSE Chemistry - as if it were the future - in 1991).

So, people should word their reports better: there's not 5 years left, there's 5 years of what we're currently accessing left.  We could access more, we could have hundreds of years of fuel; we just have to invest in it, we just have to get at it... And while it's cheaper to drive the stuff in on lorries from Russia, or haul it on barges over the oceans, that investment will never happen.

Thursday, 15 May 2014

Setting up Code::Blocks & Boost on Windows

Updated for 2016: https://youtu.be/Mioo8Hnp6M8

Below is an older post, please see the new YouTube Video, like subscribe & if you benefited the tip jar is just there on the right!

Xel - 5th June 2016.




Building boost with Mingw32, installed via CodeBlocks....

First things first, download the installer for CodeBlocks for Windows; I selected the mingw32 compiler, and at the time of writing this is:

codeblocks-13.12mingw-setup.exe

From the downloads page.

A quick example program can then be thrown together from the IDE.  I'm interested in pure C++ using C++11, so I set the compiler to use the -std=c++11 switch, as well as fixing up the warning settings.


Once we're happy with the test project in the IDE we can go to the same file in a command prompt:


What we've just seen is adding the compiler installed by CodeBlocks to the environment path:

PATH=%PATH%;C:\program files\CodeBlocks\MinGW\bin

We then edited the file with Notepad:

#include <iostream>

using namespace std;

int main ()
{
cout << "Hello World" << endl;
}

Then from the command line we built the program:

mingw32-g++ -std=c++11 -Wall -o main.exe main.cpp

This gave us our application, and we could run "main.exe".


Now we've got all that working, my project uses the C++ boost libraries, so we need to build them with the mingw32 tools we've installed.


So, again we need a command prompt and the same PATH setting we have above:

PATH=%PATH%;C:\program files\CodeBlocks\MinGW\bin

Then, from the folder we've extracted the boost source to, we need to build the boost build engine:

bootstrap.bat mingw

Once this is complete, we can build the libraries themselves:

b2 toolset=gcc

We now have the libs for linking in the boost sub folder /stage/lib

And we can find the headers in /boost...

We won't cover accessing them through the command line, instead BACK TO CODE BLOCKS!


So, we fire up Code::Blocks and create a new project (it's empty), then we create our main.cpp and add "Hello World" in there.

We set the compiler flags to "-Wall", "-Weffc++" and "-std=c++11", and check the compile & link worked.

Then we have to add the boost libraries.  The first is the system library; we are in debug mode for this project, so we link against the libboost_system library with an added 'd' for debug.  If we switch to release with the drop down at the top, we need to add a link to the library which is not the debug one!

We also add the libboost-filesystem library, again in debug.

Finally, in the search directories we tell everything where to find boost itself; this allows our #includes to find the boost headers.

I keep the paths relative, so that whether I move my code and boost to "C:\Code" or "C:\Users\Jon\Desktop" the paths will be okay.

Then in the code, we'll include a check for a file on the file system, so include the boost filesystem header.  Then we add a path to "C:\hello.txt", and an exists check on that file...

CODE::BLOCKS BUG/QUIRK!
And we build.  Now, this is the first quirk you'll find: if you just build now, the compiler will go off and start to rebuild boost... This is just stupid, so as the text is streaming past, ABORT the build, then right click on the source file and build from there; you should see nothing needs doing.  Then rebuild again and the project is done almost instantly.  This is a quirk of Code::Blocks, as it caches boost into the list of items to be built... But we already built boost...

Follow the video, it'll make sense.


#include <iostream>
#include <string>

#include <boost/filesystem.hpp>

using namespace std;

int main ()
{
    cout << "HEllo World" << endl;


    cout << "Testing if \"C:\\Hello.txt\" exists" << endl;

    boost::filesystem::path l_path("c:\\Hello.txt");
    if ( boost::filesystem::exists(l_path) )
    {

        cout << "File does exist!" << endl;
    }
    else
    {

        cout << "File missing" << endl;
    }

}

From the command line, we would have to add "-lboost_system-mgw47-mt-d-1_55" and "-lboost_filesystem-mgw47-mt-d-1_55" (with -L pointing at the stage\lib folder) to link against boost.
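Pulled together, the full command might look something like this - the mgw47 and 1_55 tags depend on your compiler and boost versions, and the paths on where you extracted boost:

mingw32-g++ -std=c++11 -Wall -I C:\boost_1_55_0 -o main.exe main.cpp -L C:\boost_1_55_0\stage\lib -lboost_filesystem-mgw47-mt-d-1_55 -lboost_system-mgw47-mt-d-1_55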

The boost libraries we have are of course the dynamic libraries; if you wish to contain the libraries within the application you're distributing, rather than having to hand out the libraries you've also built, you need to build the application statically and rebuild boost with the static flags - which is a topic for another day.