Friday, 25 September 2015

More Chess - Toxic Players

I'm beginning to find the online Chess community a bit more feeble than I remember... I used to play on Yahoo Games Chess, years ago... I even met a lass through it once, seriously, chess got me laid... However, playing on en.lichess.org, I keep having my opponents leave the match!


In this case the chap had the upper hand: my opening went wrong, and he moved his bishop aggressively into my centre, but then my knights rescued the situation and he left?!?!?!  Leaving me to just wait for his clock to run out or "claim victory".


Thursday, 24 September 2015

Chess & Me

I've always been fascinated by board games. Chess was perhaps the first I obsessed over, and I wasn't bad at playing it... When I was around 10 we had a wooden chess set at home, and I'd spend hours looking at positions, but I was never encouraged that this was a worthwhile thing to do.

When I was in my early twenties, I came back to Chess and had a few books and bits and bobs for it, spent ages analysing positions once.  It's sort of part of me.

Anyway, I recently came across en.lichess.org... I played a few games whilst anonymous and got totally thrashed... But I did a bunch of their training puzzles (some of which I think I saw better, or at least equivalent, options for) and then signed up... Ironically, when you sign up their captcha check is a chess puzzle, and I kept getting it wrong!

But that was soon sorted and I started my first game; no pressure, so I went for the classic 20 minute (plenty of time) game style.  I hate this speed chess; I'm not practised enough for it.

Here's my ending board, did I win... I think I did, but the opponent just left :(

(I'm playing white)

Wednesday, 21 May 2014

Virtual CPU - Signed Addition & Endianness

From yesterday's post, then, we should have learned something and perhaps even gone to look for a solution; you may have even coded a solution into the CPU code we're working on...

The solution I'm going with, however, is a total cheat: I'm going to add the Cout of the last adder back onto the result as a single full-adder...


Essentially we add the carry out into the first bit again.  But we only want to do this when using a Signed value... so we'll cheat and use our "Signed" flag to switch between this new function and the original one... Let's get on with creating "AddSignedTwoBytes":

void Electronics::AddSignedTwoBytes (
    byte& p_Register0,
    byte& p_Register1,
    byte& p_Result,
    bool& p_Overflow,
    const bool& p_Debug)
{
    bool l_CarryIn = false;
    bool l_CarryOut = false;
    bool l_Sum = false;

    // For each bit we need to mask the
    // right most bit out of the register,
    // meaning we start at 00000001 and
    // for each loop shift the register
    // so the bit we're interested in sits
    // over the mask in the right-most position.

    // Our mask never changes
    byte l_mask = 0x01;

    // For each bit we run the masking
    // then the adder and handle switching
    // the result into the register.
    // You can find more efficient ways!
    for (int i = 0; i < 8; ++i) // 8 bits in a byte
    {
        if ( p_Debug )
        {
            std::cout << "Cycle: " << i << std::endl;
            std::bitset<8> msk { l_mask };
            std::cout << "Mask: " << msk << std::endl;
            std::bitset<8> reg0 { p_Register0 };
            std::bitset<8> reg1 { p_Register1 };
            std::cout << "Register 0 [" << reg0 << "]" << std::endl;
            std::cout << "Register 1 [" << reg1 << "]" << std::endl;
        }

        // Get the A & B bits by shifting & masking
        // the registers
        bool A = ( ( ( p_Register0 >> i ) & l_mask) == 1);
        bool B = ( ( ( p_Register1 >> i ) & l_mask) == 1);

        // We have the carry in and the A & B now, so
        // we can call the adder.
        // Because the carry out and the sum are separate
        // in our code here, we don't need to alter "reg0" or
        // "reg1", we can just logically add the bits set
        // into the p_Result below!
        Adder(A, B, l_CarryIn, l_CarryOut, l_Sum, p_Debug);

        if ( p_Debug )
        {
            // This should be a value from our Adder trace table!
            std::cout << "Adding: " << A << " " << B << " " << l_CarryIn << " | " << l_CarryOut << " " << l_Sum << std::endl;
        }

        // The carry out simply becomes the carry in
        // I'm sure you can see one way to optimise this already!
        l_CarryIn = l_CarryOut;

        // Now the register changes based on the sum, but
        // we also output the binary
        if ( p_Debug )
        {
            std::bitset<8> resultBefore { p_Result };
            std::cout << "Result Change: " << resultBefore << " -> ";
        }

        // Now the logic.
        // Instead of pushing the logical
        // sum into the "Register0" parameter,
        // we push it into the p_Result parameter!
        if ( l_Sum )
        {
            // Mask is shifted, and always 1 in the i position,
            // so we always add a 1 back into the target
            // register in the right location
            p_Result = p_Result | ( l_mask << i);
        }
        else
        {
            // We know the mask is ON, so inverting it and moving it
            // will give us an always off...
            p_Result = p_Result & ~(l_mask << i);
        }

        // The register changed, so finish the debug statements
        if ( p_Debug )
        {
            std::bitset<8> resultAfter { p_Result };
            std::cout << resultAfter << std::endl;
        }
    }

    //======================================
    // Add the carry out to the first bit again,
    // a single extra full adder on bit 0
    bool A = ( ( p_Result & 0x01) == 1); // Take the first bit
    Adder(A, l_CarryOut, false, l_CarryOut, l_Sum, p_Debug);

    // Now the logic, again pushing the
    // sum into the p_Result parameter
    if ( l_Sum )
    {
        // Set bit 0 of the result
        p_Result = p_Result | 0x01;
    }
    else
    {
        // Clear bit 0 of the result
        p_Result = p_Result & ~0x01;
    }
    //======================================

    // The final carry out becomes our
    // overflow
    p_Overflow = l_CarryOut;
}

So with this code we need to test the function; let's add a new test function:

void Electronics::TestSignedAdd()
{
    byte l_A = -127;
    byte l_B = 7;
    byte l_result = 0;
    bool l_Overflow = false;

    // Debug output off for this test
    AddSignedTwoBytes(l_A, l_B, l_result, l_Overflow, false);

    std::cout << "Testing signed add:" << std::endl;
    std::cout << "(" << (int)l_A << " + " << (int)l_B << ") = " << (int)l_result << std::endl;
}

Now, before we run the code, what do we expect to see?.. Well, we expect the value of -120, which has the binary pattern 10001000.
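
Working it through on paper, the way our adder will, with -127 stored as the pattern 10000001 (that's -128 + 1 with our sign-bit scheme):

  10000001   (-127, which is -128 + 1)
+ 00000111   (7)
----------
  10001000   (-120, which is -128 + 8)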

Let's run the code and see...


What the hell just happened?... 129 + 7... That's not what our code says... and the answer is 136... What is going on!??!?!

Calm down, calm down, everything is fine... The binary pattern of the result is correct... see...


So what is with the values we see on the screen?  If our register holds the binary pattern for -120, why is our result showing 136...!?!?!

Well, the signed binary for -120 is the same pattern as the unsigned value 136!  It's as simple as that: our CPU is working, it's our C++ which has thrown us a curve ball.

The cout stream took the byte we sent and converted it for display, but the byte itself has no knowledge of signing; it is in fact an unsigned char, as we defined the type.  So the binary might be perfectly fine, but the interpretation of that binary is wrong.

This is a case of being careful with how you test your code, and it's an example where at least two tests are needed to confirm a result; never take the result of just one test as canonical.  Always try to find some other way to test a value you calculate, or validate your input, or put bounds around your system, because when something goes wrong you always need to double-check yourself before complaining to others.

In the case of this code cout is showing the unsigned values, and that's fine, we can ignore it because to get the true binary we can just use bitset...

#include <bitset>
std::bitset<8> l_binary (p_Result);
std::cout << l_binary << std::endl;
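
If we did want the console to show the signed interpretation, one option (a sketch only, assuming byte is an unsigned char as in our code, running on a typical two's complement machine) is to push the pattern through a signed char before widening it to an int for cout:

byte l_pattern = 136;                                    // the bits 10001000
std::cout << (int)l_pattern << std::endl;                // 136, the unsigned reading
std::cout << (int)(signed char)l_pattern << std::endl;   // -120, the signed reading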

This is a lesson to learn in itself: always check & recheck your code.

But now we have to think about the repercussions of this for our CPU: even if we've set the signed flag, the data are being stored unsigned; the memory is storing the values just as patterns of binary... And this is an important thing to keep in mind when you are working with a CPU, how the value is interpreted is more important than how it is stored, because (hopefully) the binary representation is going to be the same all the time...

OR IS IT?

Unfortunately not; the binary we've dealt with so far is what we could call "Little-Endian", that is, the lowest value assigned to a bit in the byte starts on the right, and we read the byte from right to left.  Essentially the opposite way to how we would read this very English text...


If we read the byte the opposite way around then the values would be reversed:


This is called Big-Endian.

Intel processors have pretty much always been little-endian, whilst other firms have used big-endian; a notable big-endian family is the Motorola 680x0 CPUs.  Yes, those of the Atari ST, the Amiga and the original Mac... They all had big-endian CPUs.

Some said this set a gulf between the two camps, and emulating between the two systems was very time consuming, because to emulate a big-endian processor on a little-endian machine used to mean a lot of overhead in converting between the binary representations.
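
As an aside, if you want to see which camp the machine in front of you falls into, here is a small standalone sketch (not part of our CPU code) which simply looks at which end of a two-byte value the 1 lands in memory:

#include <cstdint>
#include <iostream>

int main()
{
    // 0x0001 stored in two bytes; the byte order in memory
    // tells us the endianness of the machine we're running on.
    std::uint16_t l_value = 0x0001;
    unsigned char* l_bytes = reinterpret_cast<unsigned char*>(&l_value);

    if (l_bytes[0] == 0x01)
        std::cout << "Little-endian (like Intel x86)" << std::endl;
    else
        std::cout << "Big-endian (like the Motorola 680x0)" << std::endl;

    return 0;
}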

Our CPU is going to suffer from this problem, because we've built it, and its adder, to use little-endian principles, e.g. we start the adder loop at 0 and go up to n-1, whereas for a big-endian machine we'd want to start the adder loop at n-1 and go down to 0 to complete an addition.

A challenge would be to go back over our whole CPU and convert it for endianness, making it a generic, configurable hardware implementation of an 8-bit logical operating unit... I'm not going to do it, I'm just here to guide your experience.

Tuesday, 20 May 2014

Virtual CPU - Adders & Signing Discussed

Let us review the physical electronics our code emulated for addition: we created the function "AddTwoBytes", which in turn used "Adder", the adder being the code:

Sum = Cin ^ (A ^ B);
Cout = (A & B) | (Cin & (A ^ B));

This is of course the code representation of the electronic logic gates "AND", "OR" and "XOR". We could have gone further and, rather than using "XOR" as a single operation, broken it down into separate "AND", "OR" and "NOT" gates. This is how the first computers worked, after all.  A good electronics primer is a good place to start looking at logic gates in more detail.

But the use of "XOR" as a single unit, rather than a more complex set of other logic gates is what we programmers would call encapsulation.

We encapsulated this whole logic into a function called "Adder", and we encapsulated its use to add each bit of two bytes into yet another function.
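
For reference, here is a minimal sketch of what that encapsulated "Adder" can look like, using the parameter order it is called with in "AddSignedTwoBytes" above (shown as a free function; the version used in these posts lives with the rest of the Electronics code):

#include <iostream>

void Adder(
    const bool& A,
    const bool& B,
    const bool& p_CarryIn,
    bool& p_CarryOut,
    bool& p_Sum,
    const bool& p_Debug)
{
    // Work on copies, so the output references can safely
    // alias the inputs (as they do in the carry-back call
    // at the end of AddSignedTwoBytes)
    const bool l_A = A;
    const bool l_B = B;
    const bool l_Cin = p_CarryIn;

    // The full adder logic from above
    p_Sum = l_Cin ^ (l_A ^ l_B);
    p_CarryOut = (l_A & l_B) | (l_Cin & (l_A ^ l_B));

    if ( p_Debug )
    {
        std::cout << l_A << " " << l_B << " " << l_Cin
            << " | " << p_CarryOut << " " << p_Sum << std::endl;
    }
}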

Luckily electronics engineers have been at this encapsulation lark as long as us programmers, and so instead of representing the adder logic like this:


They've gone ahead and made the Adder look like this:

And then if we think about wiring each bit of two bytes through adders, each adder passing its carry out to the carry in of the next we get this wonderfully complex diagram:


Do not be scared by this; yes, I drew it by hand so there may be bugs, but all I want you to glean from this is how complex the electronics are, because remember that this mass of wires and adders, and inside each adder the logic gates, equates to the simple loop in the "AddTwoBytes" code!

This is one of the reasons many people emulate electronics, or CPUs, or just this kind of gated logic, in software: creating the hardware can be so much more complex, costly and hard to get right first time, while code we can change with a wave of the hand.

This representation, wiring each bit from the registers, is purposefully complex however, and there are other adders you can look at.

So how does all this get us signed numbers in our CPU?

When we enter signed mode in the CPU we want to consider the top most bit of our bytes as a sign: when this bit is off (0) the number is positive, when the bit is on (1) the number is negative.

The flag doesn't really mean "and now this is negative", it is actually that we change the meaning of the value, so the top most bit changes from having a value of +128, to having a value of -128.

Hence the binary 00000111 is still 7... but 10000111 is now ( -128 + 4 + 2 + 1 ), holding the value -121.

Back to our electronics then: if we have the top most bit set and we overflow, do we want to just throw it away as an error?... Well no, because it may be that the number has just gone from being a negative to a positive, or it's gone from a positive to a negative... So we want to carry the overflow back to the first adder...


But, of course this won't work, because you can't have an input to a process which is that very same process's output...

So what do those sneaky electronics chaps do?

Well... I'm not going to tell you... Go find out...

Thursday, 8 May 2014

Domino Computing

As an adjunct to our C++ based Electronics Code.. How about Domino based computing?



Sunday, 27 April 2014

Binary Adder written in C++ (Binary Logic in Code)

In our virtual CPU so far, when adding we've simply used the arithmetic operation "+" on our bytes.

You can see Part 1 or Part 2 yourself.

Of course this is NOT how a real processor handles adding; a real processor adds in binary.  There are many better sources for binary addition instruction out there, so rather than labour the point let us just cover the important part, which is carrying...

Given a 4 bit binary number, we could add like this:

  0000
+ 0000
------
  0000  (No Carries)
  

  0001
+ 0000
------
  0001  (No Carries)
  

  0001
+ 0001
------
  0010  (Carry over 1 to the next column)  

What we're doing is looking at the columns and performing a truth operation: 0 + 0 = 0, 0 + 1 or 1 + 0 = 1, and 1 + 1 = 0 plus a carry... The first column (the right most) always starts with a carry of zero... So in this third binary sum, when we come to evaluate the second column it will have a carry over, and the sum there is 0 + 0 + 1 = 1.

Essentially we take each column plus a carry input, so three inputs, and turn them into two outputs: one being the value we want to store and the other being the third input into the next column.

This is known as a combinational logic circuit, and again there are many other, better sources to explain this to you...
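
To make the "three inputs, two outputs" idea concrete, here is a small standalone sketch (my own, not from the video) that prints the trace table using the standard full adder expressions, the same ones our Adder uses:

#include <iostream>

int main()
{
    // Print the full adder trace table: A, B, CarryIn -> Sum, CarryOut
    for (int A = 0; A <= 1; ++A)
    {
        for (int B = 0; B <= 1; ++B)
        {
            for (int Cin = 0; Cin <= 1; ++Cin)
            {
                int Sum = Cin ^ (A ^ B);
                int Cout = (A & B) | (Cin & (A ^ B));
                std::cout << A << " + " << B << " + " << Cin
                    << " = " << Sum << " carry " << Cout << std::endl;
            }
        }
    }
    return 0;
}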

What has this got to do with our CPU though?  Well, so far we've added with the processor in our machine; we've not looked at the actual nuts and bolts of addition.  At the silicon level what we have is a bunch of logic gates, and these flow electrical current around.

We'll consider the positive electrical current represented by a "boolean" set to true, and a negative or null, electrical current as a "boolean" set to false.

When we represent our register, instead of a byte, really our code should hold a set of 8 booleans.

I'm not going to go that far, instead I'm going to implement some electronics in software to manipulate our bytes like they're binary electrics.

The first thing we need to think about is something known as an adder; you can go look elsewhere for the full low down, however there are two kinds: a full adder, which takes the two columns plus the carry in and gives the sum plus a carry out... and a half-adder, which just adds the two binary values together and forms a sub-part of the full adder.
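
As a taster, a half adder is tiny; here's a hedged sketch in the same spirit as the rest of our code (the names are mine), using the bitwise forms of the operators we're about to look at:

void HalfAdder(
    const bool& A,
    const bool& B,
    bool& p_Carry,
    bool& p_Sum)
{
    // Sum is 1 when exactly one input is 1
    p_Sum = A ^ B;
    // Carry is 1 only when both inputs are 1
    p_Carry = A & B;
}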

Now these adders are circuits built from logic gates, and we've got similar logic for acting on booleans in C++... "&&" is a logical "AND"... "||" is a logical "OR"...

With this code:

bool a = true;
bool b = false;
bool c = a && b;

c will have a value of false, because both a AND b were not true.

bool d = a || b;

d will be true, because one of a OR b was already true...

We need three logical operations to create an adder... AND and OR, but also XOR, the Exclusive OR, whereby in the example above d would only be true if exactly one of the inputs was true; if both inputs were true, or both inputs were false, then d would have been false.
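
C++ doesn't have a logical XOR operator to sit alongside "&&" and "||", but on booleans both "^" and "!=" behave as an XOR; continuing with the a and b from above:

bool e = a ^ b;    // true, because exactly one of a or b is true
bool f = a != b;   // also true, the same XOR behaviour
bool g = a ^ a;    // false, because both inputs are the same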

Let's get on with a live coding session creating our electronic logic in code form.  We're going to build debugging into this, so you will see examples of binary logic, binary shifting and std::bitset, all useful things for manipulating binary data in programs.


So that's a binary adder implemented in C++.

But on its own it's not much use; we need to add two of our registers together.

In a real CPU the adders would be wired to the register bits with wires, physical gold wires inside the chip.  But we can't do that, so instead I'll pass both registers by reference to a function which will operate directly on them and return the new value plus whether there was an overflow.

So this function will carry out the binary addition on each column in the register.  I'll do this in code with a moving mask, so I shift the bit we're interested in into the right-most column each time, run the adder on it, and then mask and shift the bit result back into the register.

You may find other ways... That's a challenge!


So now we can see the adding of two unsigned bytes, and any overflow being reported.

How might we then change our CPU?... Well, this adder only works in unsigned mode, so in our CPU we can check if we're in unsigned mode, employ this adder, and now successfully set the overflow flag!

With that done, let's think what else you need to do... Well, in our machine code programs it would be nice to know whether we've had an overflow after an addition...

Now, with an addition in our CPU we can see that the adder leaves some value in register 0 which might be of use, it might have overflowed, but it contains the remainder.

So, how about we add an instruction op code which loads the overflow flag into register 1?  We can then save register 0 somewhere, clear register 0, and do a jump equal based on the value!

I'll leave you to do this yourself... There is no code with this edition, the code will come later, but you can see my code in the video... Let's see those YouTube likes & subscriptions folks!