From yesterday's post, then, we should have learned something and perhaps even gone looking for a solution; you may even have coded a solution into the CPU code we're working on...
The solution I'm going with, however, is a total cheat: I'm going to add the Cout of the last adder back onto the result using a single full-adder...
Essentially we add the carry out into the first bit again. But we only want to do this when using a signed value... so we'll cheat and use our "Signed" flag to pick between this new function and the original one... Let's get on with creating "AddSignedTwoBytes":
void Electronics::AddSignedTwoBytes (
byte& p_Register0,
byte& p_Register1,
byte& p_Result,
bool& p_Overflow,
const bool& p_Debug)
{
bool l_CarryIn = false;
bool l_CarryOut = false;
bool l_Sum = false;
// For each bit we need to mask the
// right-most bit out of the register,
// meaning we start with a mask of 00000001
// and for each loop we shift the register
// right so the bit we're interested in
// sits over the mask's position.
// Our mask never changes
byte l_mask = 0x01;
// For each bit we run the masking
// then adder and handle switching
// the result into the register.
// You can find more efficient ways!
for (int i = 0; i < 8; ++i) // 8 bits in a byte
{
if ( p_Debug )
{
std::cout << "Cycle: " << i << std::endl;
std::bitset<8> msk { l_mask };
std::cout << "Mask: " << msk << std::endl;
std::bitset<8> reg0 { p_Register0 };
std::bitset<8> reg1 { p_Register1 };
std::cout << "Register 0 [" << reg0 << "]" << std::endl;
std::cout << "Register 1 [" << reg1 << "]" << std::endl;
}
// Get the A & B bits by shift & masking
// the register
bool A = ( ( ( p_Register0 >> i ) & l_mask) == 1);
bool B = ( ( ( p_Register1 >> i ) & l_mask) == 1);
// We have the carry in and the A & B now, so
// we can call the adder
// Because the Carry out, and the Sum, are separate
// in our code here, we don't need to alter "reg0" or
// "reg1", we can just logically add the bits set
// into the p_Result below!
Adder(A, B, l_CarryIn, l_CarryOut, l_Sum, p_Debug);
if ( p_Debug )
{
// This should be a value from our Adder trace table!
std::cout << "Adding: " << A << " " << B << " " << l_CarryIn << " | " << l_CarryOut << " " << l_Sum << std::endl;
}
// The carry out simply becomes the carry in
// I'm sure you can see one way to optimise this already!
l_CarryIn = l_CarryOut;
// Now we change the result based on the sum, but
// we also output the binary
if ( p_Debug )
{
std::bitset<8> resultBefore { p_Result };
std::cout << "Result Change: " << resultBefore << " -> ";
}
// Now the logic
// Now instead of pushing the logical
// summing into "Register0" parameter,
// we push it into the p_Result parameter!
if ( l_Sum )
{
// Mask is shifted, and always 1 in the i position
// so we always add a 1 back into the target
// register in the right location
p_Result = p_Result | ( l_mask << i);
}
else
{
// We know the mask is ON, so inversing it and moving it
// will give us an always off...
p_Result = p_Result & ~(l_mask << i);
}
// The register changed, so finish the debug statements
if ( p_Debug )
{
std::bitset<8> resultAfter { p_Result };
std::cout << resultAfter << std::endl;
}
}
//======================================
// Add the final carry out onto the first bit again
// Take the first bit of the result
bool A = ( ( p_Result & 0x01) == 1);
// The wrap-around add has no carry of its own coming in,
// and we take a copy of the carry out so we don't pass the
// same variable as both an input and an output of the Adder
bool l_NoCarryIn = false;
bool l_WrapBit = l_CarryOut;
Adder(A, l_WrapBit, l_NoCarryIn, l_CarryOut, l_Sum, p_Debug);
// Now the logic: push the summed bit
// back into the p_Result parameter,
// just as we did inside the loop above
if ( l_Sum )
{
// Set the first (right-most) bit of the result
p_Result = p_Result | 0x01;
}
else
{
// Clear the first (right-most) bit of the result
p_Result = p_Result & ~0x01;
}
//======================================
// The final carry out becomes our
// overflow flag
p_Overflow = l_CarryOut;
}
So with this code in place we need to test the function; let's add a new test function:
void Electronics::TestSignedAdd()
{
byte l_A = -127;
byte l_B = 7;
byte l_result = 0;
bool l_Overflow = false;
// Run the add with the debug output switched off
AddSignedTwoBytes(l_A, l_B, l_result, l_Overflow, false);
std::cout << "Testing signed add:" << std::endl;
std::cout << "(" << (int)l_A << " + " << (int)l_B << ") = " << (int)l_result << std::endl;
}
Now, before we run the code, what do we expect to see?... Well, we expect the value of -120, which has the binary pattern 10001000.
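If you want to double-check that expectation before running anything, here's a tiny standalone sketch (it only uses bitset, nothing from our Electronics class) which prints the patterns we expect to go in and come out:
#include <bitset>
#include <iostream>

int main()
{
    // Force -127 and -120 into a byte so we see their two's complement patterns
    std::bitset<8> a { static_cast<unsigned char>(-127) };        // 10000001
    std::bitset<8> b { 7 };                                       // 00000111
    std::bitset<8> expected { static_cast<unsigned char>(-120) }; // 10001000
    std::cout << a << " + " << b << " should give " << expected << std::endl;
    return 0;
}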
Let's run the code and see...
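If you can't run it yourself, with the debug flag off the console output should look something like this:
Testing signed add:
(129 + 7) = 136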
What the hell just happened?... 129 + 7... That's not what our code says... and the answer is 136... What is going on!??!?!
Calm down, calm down, everything is fine... The binary pattern of the result is correct... see...
So what is with the values we see on the screen, if our register holds the binary pattern for -120, our result?!?!
Well, the signed binary for -120 is the same pattern as the unsigned value 136! It's as simple as that: our CPU is working, it's our C++ which has thrown us a curve ball.
The cout stream took the byte we sent and converted it for display, but the byte itself has no knowledge of signing; it is in fact an unsigned char, as we defined the type. So the binary might be perfectly fine, but the interpretation of that binary is wrong.
This is a case of being careful with how you test your code, and an example of where at least two tests are needed to confirm a result; never take the result of just one test as canonical. Always try to find some other way to test a value you calculate, or validate your input, or put bounds around your system, because when something goes wrong you always need to double-check yourself before complaining to others.
In the case of this code cout is showing the unsigned values, and that's fine, we can ignore it because to get the true binary we can just use bitset...
#include <bitset>
std::bitset<8> l_binary (p_Result);
std::cout << l_binary << std::endl;
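And if we do want the console to show the signed interpretation as well, one option (just an aside, not something the CPU itself needs) is to squeeze the byte through a signed 8-bit type before widening it to an int for printing:
#include <cstdint>
std::cout << (int)(std::int8_t)p_Result << std::endl; // -120 on a two's complement machine for the pattern 10001000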
This is a lesson to learn in itself: always check & recheck your code.
But now we have to think about the repercussions of this in our CPU: even if we've set the signed flag, the data are being stored unsigned; the memory is storing the values just as patterns of binary... And this is an important thing to keep in mind when you're programming, and when you are working with a CPU: how the value is expressed is more important than how it is stored, because (hopefully) the binary representation is going to be the same all the time...
OR IS IT?
Unfortunately not. The binary we've dealt with so far is what we could call "Little-Endian", that is, the lowest value assigned to a bit in the byte starts on the right, and we read the byte from right to left. Essentially the opposite way we would read this very English text...
If we read the byte the opposite way around then the values would be reversed:
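For example, here's a little standalone sketch (not part of our CPU) which reverses the bit order of our result pattern, just to show how different the number becomes when you read it from the other end:
#include <bitset>
#include <iostream>

int main()
{
    unsigned char value = 0x88; // 10001000, the -120 / 136 pattern from above
    unsigned char reversed = 0;
    for (int i = 0; i < 8; ++i)
    {
        // Take bit i of the original and drop it into position (7 - i)
        reversed |= ((value >> i) & 0x01) << (7 - i);
    }
    std::cout << std::bitset<8>(value) << " read the other way around is "
              << std::bitset<8>(reversed) << " (" << (int)reversed << ")" << std::endl;
    return 0;
}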
Intel processors have pretty much always been little-endian, whilst other firms have used big-endian; notable platforms using big-endian processors are those built around the Motorola 680x0 family of CPUs. Yes, the Atari ST, the Amiga, the original Mac... they all had big-endian CPUs.
Some said this set a gulf between the two camps, and emulating one system on the other was very time consuming, because to emulate a big-endian processor on a little-endian machine used to mean a lot of overhead in converting between the binary representations.
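To make that overhead concrete, here's a minimal sketch (again nothing our 8-bit CPU needs yet, and using a 16-bit value because the byte-order question only really appears once you have more than one byte) of the kind of byte swap an emulator ends up doing over and over:
#include <cstdint>
#include <iostream>

int main()
{
    std::uint16_t value = 0x1234;
    // A big-endian machine stores this as 12 34, a little-endian machine
    // stores it as 34 12, so converting is simply swapping the two bytes over
    std::uint16_t swapped = static_cast<std::uint16_t>((value >> 8) | (value << 8));
    std::cout << std::hex << value << " byte-swapped is " << swapped << std::endl;
    return 0;
}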
Our CPU is going to suffer from this problem, because we've built it, and its adder, to use little-endian principles: e.g. we run the adder loop from 0 up to n-1, whereas for a big-endian machine we'd want to start the adder loop at n-1 and count down to 0 to complete an addition.
A challenge would be to go back over our whole CPU and convert it for endianness, making it a generic, configurable hardware implementation of an 8-bit logical operating unit... I'm not going to do it, I'm just here to guide your experience.