Monday 30 July 2012

Olympic Seats


I'm getting pretty tired of these documentaries about British Olympians. There have been two I've seen, one about a diver and one about a judoka... both of whom have gone on to fail to get a medal, or even to qualify...

It's all rather tiresome, and very British: the moment anyone says "we might win", we either worry so much we balls it up, or we get cocky and balls it up... The net result is a lot of balls and nearly no medals...

So people, stop bigging these athletes up, let them get on doing their own little things, and get moaning instead about the stupid seating system, which in my experience put people off trying to buy tickets and which has resulted in the fiasco of swathes of empty seats... Apart from the archery (which had a very limited number of seats) I've seen multitudes of empty seats at the judo, at the swimming, at the kayaking and at other events... The dressage too!...

The only place I've not seen empty seats was the ladies' beach volleyball matches!!!!

Sunday 29 July 2012

Code Kicker

This evening, I've relived the good old days: I've spent a lot of time (about five hours) coding... This is no big deal, but... I've done this late at night, I've done it after a few drinks... and I've achieved a massively impressive (at least to myself) amount of work, changing the whole underlying strata of a project I'm working on... and it all worked first time... (well, apart from spelling "progress" without an r, it's done).

And this is the good old days because it was just like being back at uni... I loved coding at uni; I think I picked a degree with too little actual coding in it... But tonight I relived that feeling of achieving something.

Thursday 26 July 2012

Charlotte Blackman

I'm going to keep this one short and sweet... My thoughts, sympathy and condolences go out to all of Charlotte's immediate family, close friends of ours.

Such a wonderful family, such nice people, we're absolutely devastated for you and only wish that things were different.


My wife is devastated by the news; she recalls only how wonderful Charlotte was, remembering her coming and doing some cleaning around our house years back. Only a fortnight ago we had a meal out with Rach and she was telling us how proud she was of Charlotte getting a first in her degree.

It is so unfair for this to have happened, so unkind of the world to have thrown this burden of grief at such undeserving people, and I can only imagine doubly so for Matt.

As I understand it the family won't return until they can bring Charlotte with them, and until we see them I can only hope they know our thoughts are with them.

Monday 23 July 2012

Why....

Why do people not phone me (I have a phone, folks, which tells me I have missed calls; in fact it logs all my calls in and out) and then say "I've tried to get you, but it just rang and rang"... No it didn't; you've not tried to get me at all, you've waited for me to call you, you shady sod.

Why do people post videos on YouTube, of what might be interesting material, and.then.they.talk.in.a.monotone.drone.so.you.stop.the.video.and.don't.listen.to.them... I mean, look around, people: look at how the BBC or Sky interviewers present information in the form of the spoken word, and emulate them!... Don't just sit and drivel on... Also... Don't... please don't just put yourself as the focus of the camera... If you're on YouTube talking about code, then for Christ's sake put some code up; don't let us see your pimply, stub-nosed, shit-eating grins.

Why do people not look at road signs?... I mean, they're up there for a reason, telling you to merge from the right, or give way to the left, or that cows are crossing... all very useful information... Not to mention, lanes have arrows in them saying where they go. If you are at an island going right and there are two lanes going right, think "Hey, directly after this island I need to take a left"... and get into the leftmost of the right-flowing lanes... Don't sit in the rightmost lane, go right and then cut everyone up going left, looking at the other people like they're morons; you're the moron!

Why do people not accept factual presentations from me?... Now, I don't know what it is, but I can be stood holding up a white piece of paper and declaring "This is white"... and my audience will stroll off... Yet someone I'm sat looking at right now can just say "This white paper, see it... it's fucking blue, dude"... and they go... "Oh yes, blue blue blue..."...


Sunday 22 July 2012

Using boost::signal Example

From my previous post you may be aware I had a problem integrating two different modules, produced separately and in isolation, which when brought together triggered a compile-time error... Well, over the day I've spent a few hours... well, about four... trying to work out what to do...

I had thought I needed some mechanism to be connectionless - hence my use of boost::asio for its UDP operation - which could deliver the messages with as little dependence on the programmer (or rather, as little code dependence on the programmer) as possible.

I spent some time looking at message exchange via some web technology, looked at named pipes and shared memory, and even gave up on the connectionless ideal and went for a network topology of TCP/IP connections... None came up as acceptable to my needs; they were either so complex in code as to need very significant amounts of testing, or they brought into my code base so many external files as to swamp the project and blur the line between multiple licenses.

However, late in the day I stumbled (and I feel a fool for stumbling so late) over boost::signals. Initially I ran through their simple tutorial and was happy to see a function being invoked after connection from another location.

After a little thinking I also hit upon a pattern via which I was able to replace my asio use of UDP as a message transporter for sending a message to multiple locations through my code.


My pattern is to have one class control the single copy of the signal, and to pass pointers to it to each outlying class; these outlying classes can then connect to receive the signal into local functions bound with boost::bind, and voilà... A signal sent to the central, and only, copy of the signal is propagated extremely quickly to the other classes.

Here is my code:


#include <iostream>
#include <string>

#include <boost/signal.hpp>
#include <boost/bind.hpp>

using namespace std;

typedef boost::signal<void(const string&)> Func;
typedef Func* FuncPtr;

class A
{
private:

    FuncPtr m_Signal;

    boost::signals::connection m_Connection;

    void OnSignal (const string& p_Message)
    {
        cout << "A::OnSignal [" << p_Message << "]" << endl;
    }

public:

    A (FuncPtr p_Signal)
        :
        m_Signal (p_Signal)
    {
        m_Connection = m_Signal->connect(boost::bind(&A::OnSignal, this, _1));
    }

    ~A ()
    {
        // Disconnect via the connection object itself
        m_Connection.disconnect();
        m_Signal = nullptr;
    }

};

class B
{
private:

    Func m_Signal;

public:

    B ()
    {
    }

    FuncPtr Getsignal()
    {
        return &m_Signal;
    }

    void Transmit (const string& p_Message)
    {
        m_Signal(p_Message);
    }

};

int main ()
{
    B* b = new B();
    A* a = new A(b->Getsignal());

    b->Transmit("Hello World");

    delete a;
    delete b;

    cin.get();
    return 0;
}


This provides a simple framework upon which I can wrap a single instance of a signal, or several different signals, into a single class - in this example "B" - and then pass out a pointer to that instance, in this example to "A". A transmit is then sent to all listeners.

One can register multiple instances of the "A" class, and the transmitted call goes to all the handlers for it.
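For instance, a quick usage sketch (the variable names here are illustrative, not from the project):

B broadcaster;
A first (broadcaster.Getsignal());
A second (broadcaster.Getsignal());
A third (broadcaster.Getsignal());

// One call, three handlers; each A prints its A::OnSignal line
broadcaster.Transmit("Hello Everyone");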

I believe I may be able to turn what I had represented as an Event raised from many places into a single Signal instantiated in one location.

I admit I will end up with multiple signals, one for each Event I had before, but the transmission system drops to zero lines of my own code, and all the lines I am using from the boost library have been very, very well tested, unlike the code one would employ from one's own hand...

My new task, therefore, is to rework this reverse concept of a message to better fit my implementation of a state machine.

Saturday 21 July 2012

Working Alone Together

I've just gotten myself through a hell of a problem - you know, one of those things you don't see coming, but which utterly brings your project to a halt. My problem resulted in my rather large C++ application project ceasing to compile...

I'll just take a moment to explain the problem, and the solution, because I want to record it somewhere, and as it's past 1:10am and I've not got my notes near me, I'll just make a note here for the world and myself to read tomorrow. The problem: including boost's asio.hpp after you've included windows.h results in the boost socket type header reporting a compile-time error that the WinSock API has been defined, whereas the boost asio systems want to use the WinSock2 types. The error is not very helpful, but the solution is simple: go through all the headers you've added before the one using asio, and wherever you include windows.h, ensure you have #define WIN32_LEAN_AND_MEAN first.

Defining WIN32_LEAN_AND_MEAN results in the Windows header file not automatically including winsock.h, which in turn means the later asio.hpp is able to include winsock2.h without any conflicts of interest.
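A minimal sketch of that header-level fix (the include set here is illustrative):

// Define WIN32_LEAN_AND_MEAN before windows.h so winsock.h is not
// dragged in, leaving asio free to pull in winsock2.h itself.
#define WIN32_LEAN_AND_MEAN
#include <windows.h>

#include <boost/asio.hpp>  // now compiles without the WinSock clash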

A neater solution in any project using asio - and actually what I did - is to go into the compiler settings and define the WIN32_LEAN_AND_MEAN name in the preprocessor directives for the whole project you're building...

Your asio includes now work and compile; anything broken was using winsock.h, or some other header excluded from windows.h by the lean-and-mean setting, and you can fix those yourself (*grin*).

This whole problem was very hard to understand; certainly I consider myself very much a boost newbie - I'm not terrible with the libraries, and I love them, but I find their requirements sometimes poorly documented... Indeed the boost asio library documents don't mention any conflict with WinSock already having been included... *sob*

The problem was only compounded by the fact that the one thing I had added to the project seemed to be wholly unrelated to asio, and utterly devoid of references to winsock. It just happened to be including windows.h without the LEAN_AND_MEAN directive...

And this brings me to the problem's root cause: it was not really an effect of including said new item, nor was it really a problem with how asio enforces the use of WinSock2... It was actually caused by "Working Alone Together".

The code which was brought into the project was not actually tested as part of the project beforehand; it was developed alone, with the intention of moving it into my project later, and being developed alone it compiled fine, it passed its NUnit tests, it matched its design and specification... This resulted in a very dire situation at my end.

If this were a few years ago (and not many) I'd have been roaring at the supplier of the new item, complaining and insisting that they'd made some fatal mistake. I'd have been ranting about how bad their work is, or how terribly they've got things working. And all of us who work in software know what we do when our code gets criticised: that's right, we get utterly defensive, we object to being put down, and we respond by pointing to measured results - in this case I'm sure I'd have had the passed NUnit tests held up as an insurmountable obstacle to getting any of my grievances addressed. Luckily I'm a little older and wiser, and I've been on the receiving end of such tirades.

Therefore I did the simple thing: I assumed I had something wrong and started to pick at the problem. Admittedly this took me two hours to unpick, but I now know what the problem is, and I can tell my co-worker that their effort has not been for naught, while asking them to look at porting to WinSock2.

I can also submit to them a new specification, or rather an update to my specification, stating that WinSock2 is the minimum requirement for the system.

And because I've not ranted, because I've pointed out all the faults involved and documented the problem clearly enough for them, they will most likely not throw their rattle out of the pram; instead they'll just look at what's wrong, take my advice as that, and get on with the work.

However, if our communication and integration points had been closer, rather than twining these two items together at the deadline hour, then... well, then we'd have seen this problem earlier... And it's been a harsh lesson, in this internet-enabled age, that really we needed to have worked more closely, rather than sticking to our own ring-fenced areas of the specification alone. We spoke daily about progress, we swapped notes on new things as they came up, but this fundamental difference of libraries at the platform level has resulted in one almighty cock-up.

A lesson learned, and a lesson to carry forward... "Don't Work Alone, when you should be working Together".

Monday 16 July 2012

Erm... LXF...

I got my moaning mail - about that chap totally missing the point of the Assembler Language series in LXF - printed this month... So, to any/all LXF readers finding my blog... welcome... and... sorry about the mess.

But if I get a few hits on the C++ CPU posts, I might make them into a dedicated webpage somewhere.

Wednesday 11 July 2012

SF stop buying Apple


I was looking at the news that San Francisco is to stop buying Macs for local officials... Not that I considered this piece of information newsworthy, but a figure struck me...

"It noted local officials spent $45,579 (£29,365) on Apple equipment in 2010."

So the BBC state... Take a look at the Apple store: $999 for a MacBook Air... $1,199 for a MacBook Pro... $600 just for a Mac Mini, and $2,500 for a Mac Pro... So, on average, we can point a very rough finger at $1,000 for a Mac...

Let us assume a generous 25% government discount, so $750 per machine... That means San Francisco buys about 60 machines a year? I think my employer, a much smaller entity than the local authorities in SF, buys more than 60 machines a year...
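For the record, the rough sum behind that figure: $45,579 / $750 ≈ 61 machines a year.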

Their reason for not buying Macs any more is also something trivial. Rather than think about the cost to their tax payers (because let's face it, for $750 you can have a banging hot specification machine put together, far in excess of the specification of a Mac at the same price), their reason is in fact that Apple - those Chinese-labour-sweat-shop-using, elegant-style bastards that they are - won't have a cat in hell's chance of getting green certification for their Mac production methods; all that industrial effluent pouring into the Yangtze does show up somewhere, after all, so Apple are not going to bother. But the SF liberal, Berkeley corps want to see them be green, so they've cut off their 60-machines-a-year order... Boo hoo, says I.

Apple do have an interesting page about their green credentials... which, despite using the colour green throughout, only ever shows seemingly low numbers... "We're 50% effective"... but it's green, so it must be good enough!?... WOOT!

Monday 9 July 2012

Primitive Computer Vision


I'm working on a project at the moment which sees me taking screen shots of the system (the desktop as a whole) and looking for visual cues on the screen in order to seemingly smartly decide what to do. This is directly on the back of the graphics project I have at work: when something is going wrong, I want to capture the DirectX surface I'm rendering to, save it as a PNG or raw BMP, and then process it for visual cues.

The trouble is, the human eye is a hell of a lot more use than parsing through pixel data, which is pretty much what I'm reduced to at the moment.

I've been looking at implementing a neural network to recognise the colours, but neural nets have always been a bit of a dark art for me. Even though I did two three-month courses on AI at uni, neural networks only really came up in passing conversation, and then only for the lecturer to try to sell us the book he'd just written the foreword for - a book, I might add, which I still have, and which is utterly useless at explaining the concepts at hand... Anyway...

I've also looked at fuzzy logic and statistical comparison... At the moment I'm happy to stick to plain old comparison with a tolerance range - so, statistics - but I want something smarter, to actually see subtler shades later...
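To give a flavour of that plain old comparison, here's a minimal sketch (the Pixel struct and the threshold parameter are illustrative, not the project's actual types):

#include <cstdlib>

struct Pixel { unsigned char r, g, b; };

// True when every channel of the sample is within the given
// tolerance of the target colour.
bool Matches (const Pixel& p_Sample, const Pixel& p_Target, int p_Tolerance)
{
    return std::abs(p_Sample.r - p_Target.r) <= p_Tolerance
        && std::abs(p_Sample.g - p_Target.g) <= p_Tolerance
        && std::abs(p_Sample.b - p_Target.b) <= p_Tolerance;
}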

That brings me to the other can of worms I've been dealing with. You'll maybe have seen the image processing I'd been doing in C++; I posted some images on this very blog a while back... Well, that was using CImg on Ubuntu, and it worked like a charm: no issues from the off, it just seamlessly worked... Just the way I like things.

Having had that experience I was hoping for just as easy a ride with CImg on Windows... Boy, was I disappointed. It just never came together for me; I could not load JPEGs... Even linking against libjpeg, and even libjpeg-turbo, didn't work... It was like Visual Studio had an axe to grind and refused to help.

The awful kluge I've gone for is using the GDI+ JPEG loader to get a Gdiplus::Bitmap*, and then saving the image with the image/bmp encoder CLSID back through Gdiplus::Image::Save. Utter nightmare, really inefficient, using GDI... gah... hate it...
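Roughly, the round trip looks like this (a sketch only: GetEncoderClsid is the stock helper from the MSDN GDI+ samples, assumed to be defined elsewhere, and error handling is trimmed):

#include <windows.h>
#include <gdiplus.h>

// Stock MSDN helper: walks Gdiplus::GetImageEncoders looking for the
// encoder matching a given MIME type; assumed available in the project.
int GetEncoderClsid (const WCHAR* p_Format, CLSID* p_Clsid);

bool JpegToBmp (const WCHAR* p_Source, const WCHAR* p_Destination)
{
    Gdiplus::GdiplusStartupInput l_Input;
    ULONG_PTR l_Token;
    if ( Gdiplus::GdiplusStartup(&l_Token, &l_Input, NULL) != Gdiplus::Ok )
    {
        return false;
    }

    bool l_Result = false;
    {
        // GDI+ performs the JPEG decode for us
        Gdiplus::Bitmap l_Bitmap (p_Source);

        CLSID l_BmpClsid;
        if ( GetEncoderClsid(L"image/bmp", &l_BmpClsid) >= 0 )
        {
            // ...and writes the pixels back out as a BMP for CImg
            l_Result = ( l_Bitmap.Save(p_Destination, &l_BmpClsid, NULL)
                         == Gdiplus::Ok );
        }
    }   // the Bitmap must be destroyed before GdiplusShutdown

    Gdiplus::GdiplusShutdown(l_Token);
    return l_Result;
}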

But CImg refused to load the jpeg... No matter what I did... 

As for my two earlier posts about a bug in G++... Well, it's not a bug in G++, and it's technically not a bug in the GNU std C++ implementation; rather, it seems to be a bug whereby the underlying C API does not meet its own specification.

A chap was very helpful and spent some of his time mailing me back about the problem at hand, but... he was more interested in saying that this was not a bug, and that my code had passed an "out of bounds value"... When I pointed out that the C specification for the data type stated my known value was within bounds, suddenly I stopped getting mails from him :(

The other thing I have done is sign up for Dell's Project Sputnik... I'd be very, very interested to see if they take my application seriously, not least because the XPS series of laptops were high in the running when I was speccing the machine I am writing this very blog post on. The XPS machines, I thought, would seamlessly run Ubuntu, and that's the distro Dell have gone with... The reason I didn't get an XPS was that, for the specification I'm sat on right now, it costed out at over £1100 from them... whilst I paid only £550 for it, on an off-the-shelf Intel chassis... in black, rather than brushed aluminium.

That's not to knock Dell kit. I use Dell kit exclusively at work, and have done for about eight years, since I forcefully retired my HP desktop and specced a new Dell desktop; both my home servers are older quad-core Dell machines, and my old laptop is a trusty Inspiron 6400... There's a lot of Dell kit around here.

But their support for Ubuntu, even when listed as a Canonical partner, was not very forthcoming; indeed, somewhere I have quotable transcriptions of chats with Dell customer service reps where they speak of not supplying any OS other than Windows...

So, Sputnik, come my way, I'll put you through your paces, and rant, rave and be constructive along the way.

(Xel knows he's not going to get jack squat, but hey, at least I wrote it here.)

Thursday 5 July 2012

Lib STD C++ 6 Bug

My post of late last night actually points the finger of blame for that fault at MinGW; my reason for pointing my beady little finger in that direction is that the DLL causing the crash came from their repository.

But, in true computing style, they don't handle bugs in that code; it is actually a third-party product from GNU, libstdc++.
Taking a look at that site, I can't for the life of me see where to report this problem. As I point out on the bug I've posted on the MinGW tracker, it might be an OS API bug, or it may be something in MinGW, or even just something in this library... But I can't be sure without delving into the GNU code - not something I'd relish.
Anyway, right now I'm looking for some bug list, mailing list or tracker for the GNU libstdc++, before I go and start to pull my hair out.

The worst thing about this is that GNU is an open source project, and it produces some great work, but unless you can work through the pages of dross which surround it - the cloud of obfuscating crap that, in my opinion, follows almost all open source projects - you can't easily contribute feedback like this bug.
I'm no rookie programmer, so it would be lovely to go and look at this bug for them, but I can't easily do that.

Update
I've gotten the e-mail address of the maintainer of the "Active Issues" list for the library over at GNU; I got this from the top of their documentation on the library.
But once again, I can't be sure this poor person wants to know this trivial snippet of information; they'd certainly go mental getting dozens of mails a day from such a source of nonsense... But I'm still unable to decipher where to send bug reports for the library :(

MinGW C++11 Chrono Library Bug

I've had a rollercoaster of a day. I was awake most of the night in agony, took the day off work (after calling the boss - or rather, the wife did) and went to see my GP, who put me on some really strong drugs to calm my bones; it seems my heel bones are really not getting on with the rest of my body... Weird...

Anyway, when we got back from this GP visit, I sat down to expand my mind a little in the direction of C++. I was specifically going down a checklist of some work items I needed to optimise, and one of the things I was looking at was timing in the systems... I wanted to build on the very fast file system library I've written in C++ for use from C# with a library to very quickly handle times. Not that the C# support for times is not good, but it's not very useful for some of the more precise measurements I wanted to take.

So, after a bit of reading about POSIX time and Windows API time, not to mention looking at boost::chrono, I started to check out whether there were any additions to time handling in C++11 - an obvious place to look, but strangely the last I went to. Anyway, it seems much of the boost::chrono library has been accepted as standard in the C++ STL, and I quickly set about reading what I wanted...

My next problem was getting this all to work in some compiler, I tried out a bunch of things in g++ on the laptop and was happy enough under Ubuntu.  But I need this for work, that means Windows...

Introducing MinGW... I grabbed the latest mingw-get installer executable and set about installing it, then started to throw together some code. My first test program was to simply output the epoch, now, min and max time_points for the system clock; this is a standard bit of demo code, and I lifted it straight from:

Click me to visit Amazon and buy this fab book.

Page 152, if you're interested... Anyway, I typed the code in and here is how it looks:


#include <string>
#include <iostream>
#include <chrono>
#include <ctime>

using namespace std;
using namespace std::chrono;

const string asString (const system_clock::time_point& p_TimePoint)
{
    string l_timeString ("");
    try
    {
        time_t l_time = system_clock::to_time_t(p_TimePoint);
        l_timeString = ctime(&l_time);
        if ( l_timeString.size() > 0 )
        {
            l_timeString.resize(l_timeString.size()-1);
        }
    }
    catch (exception& l_ex)
    {
        cout << "Ex: " << l_ex.what() << endl;
    }
    return l_timeString;
}

void shoo ()
{
    try
    {
        system_clock::time_point l_tp;
        cout << "Epoch: " << asString(l_tp) << endl;

        l_tp = system_clock::now();
        cout << "Now: " << asString(l_tp) << endl;

        l_tp = system_clock::time_point::min();
        cout << "Min: " << asString(l_tp) << endl;

        l_tp = system_clock::time_point::max();
        cout << "Max: " << asString(l_tp) << endl;
    }
    catch (exception& l_ex)
    {
        cout << "Ex: " << l_ex.what() << endl;
    }
}

int main ()
{
    shoo();
    cout << "Done, press ENTER" << endl;
    cin.get();
    return 0;
}

Next I set about compiling under Windows... So, with the code above saved as "C:\Code\Test.cpp", I opened a command prompt and did this:

path=%path%;c:\MinGW\bin
cd \Code
g++ -x c++ -Wall -std=c++11 Test.cpp -o Test.exe

I know I need to copy a couple of DLLs from the MinGW bin folder (libgcc_s_dw2-1.dll & libstdc++-6.dll) before I can run the application. With them copied over, I run the app... And... I get a crash...
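As an aside, I believe the DLL-copying dance can be skipped by statically linking the runtimes, something along these lines (untested by me on this exact setup):

g++ -x c++ -Wall -std=c++11 -static-libgcc -static-libstdc++ Test.cpp -o Test.exe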


I'm not using a debugger at the mo, just trying out this little bit of example code, so I'm unable to delve too deeply into what's going on... However, after a little trial and error on the four system clock calls, I know it's got to be one of them, and narrowing it down I find that the generated "system_clock::time_point::min();" is throwing the spanner in the works.

The other calls all work when being passed through my code; just the minimum time_point does not?!?!?!

Checking around the internet, I find no one else mentioning this; I see the minimum time point used a lot, and I see it in a lot of examples... But this crashes...

//l_tp = system_clock::time_point::min();
//cout << "Min: " << asString(l_tp) << endl;

Rem these two lines out, and rebuild... All works fine!


So, am I able to generate a min time point?...

auto l_tp = system_clock::time_point::min();


Yep, this works... So that leaves my passing it to asString...

The only real call of any logical significance in there is "system_clock::to_time_t"... voilà, that's the problem: passing the min time_point to that function results in a logic error exception... but why?... That value should be output easily enough, and represented as a time_t... all the references I have say it is so...

So, guys and gals: passing your time_point::min() to system_clock::to_time_t will crash MinGW-built applications running their current, as of the time of writing, C++11 libraries... :(
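Until that's fixed, a defensive guard of my own devising (a sketch, not a library fix) keeps the rest of the output working:

// Workaround sketch: short-circuit the one value to_time_t chokes on
// with these MinGW builds, and defer to asString above for the rest.
const string asStringSafe (const system_clock::time_point& p_TimePoint)
{
    if ( p_TimePoint == system_clock::time_point::min() )
    {
        return "(minimum time_point)";
    }
    return asString(p_TimePoint);
}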

Monday 2 July 2012

STL Iterator versus Auto (C++)


I've had an interesting time with a piece of C++ overnight. I've been using the std vector class to hold a set of values, let's say integers, and I've been iterating through the contents of the vector to give me the list when I need to go through it... I've had no issues with this; here's my code:

#include <vector>
#include <iostream>

using namespace std;

void foo ()
{
    vector<int> nums;
    nums.push_back(1);
    nums.push_back(2);
    nums.push_back(3);

    for (vector<int>::const_iterator itr = nums.begin(); 
         itr != nums.end(); 
         ++itr)
    {
        cout << (*itr) << endl;
    }
}

This has worked fine, been fast, and easy to comprehend... But I let someone else review my code, just a snippet, and they came up with this rather bolshy "you must use an auto" comment and changed the code, thus:

for (auto pos = nums.begin(), end = nums.end();
     pos != end; 
     ++pos)
{
    cout << (*pos) << endl;
}

I took this on the chin, never complained, and had a look at it. The code certainly worked, just as it always had... So I immediately thought: is this another case of change for change's sake... something which has been rearing its ugly head in my direction time and again for weeks, months in places...

I started to do some timings; here's the complete timing code:

void autoUseFoo()
{
    vector<int> l_vector = DefaultVector();
    int l_timeA, l_timeB;
    timespec l_StartTime, l_EndTime;

    int i = 0;
    while ( i < 100 )
    {
        l_timeA = clock_gettime(CLOCK_THREAD_CPUTIME_ID, &l_StartTime);

        for (auto pos = l_vector.begin(), end = l_vector.end();
                  pos != end; 
                  ++pos)
        {
            cout << (*pos) << endl;
        }

        l_timeB = clock_gettime(CLOCK_THREAD_CPUTIME_ID, &l_EndTime);

        OutputTime (c_Auto, l_StartTime, l_EndTime);
        i++;
    }
}

void autoIteratorFoo()
{
    vector<int> l_vector = DefaultVector();
    int l_timeA, l_timeB;
    timespec l_StartTime, l_EndTime;

    int i = 0;
    while ( i < 100 )
    {
        l_timeA = clock_gettime(CLOCK_THREAD_CPUTIME_ID, &l_StartTime);

        for (vector<int>::const_iterator pos = l_vector.begin();
                  pos != l_vector.end(); 
                  ++pos)
        {
            cout << (*pos) << endl;
        }

        l_timeB = clock_gettime(CLOCK_THREAD_CPUTIME_ID, &l_EndTime);

        OutputTime (c_Itr, l_StartTime, l_EndTime);
        i++;
    }
}

As you can see, I clock the time before and the time after the loop, one version using a const_iterator, the other using auto... My argument being that, because in this instance I don't need to change the items being output, to use the auto results in needing to go through a construction (of the auto object) unnecessarily, slowing my code.

I produced this code, and I came up with exactly what I expected: the auto results show a large spike of over 12000ns to set up, and then approximately 4300ns to go through the vector... whilst the const iterator shows no spike and performs the loop in an approximate average of 4200ns... That, to me, says my code is faster. It's also, in my opinion, easier to read, and more easily changed to support better casting when we use complex objects, say to call functions... especially if one typedefs the ugly "vector<int>::const_iterator" type to something more legible.
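For example, a quick sketch of that typedef tidy-up (the alias name is just illustrative):

typedef vector<int>::const_iterator IntVectorCItr;

// The loop header now reads far more cleanly
for (IntVectorCItr pos = nums.begin(); pos != nums.end(); ++pos)
{
    cout << (*pos) << endl;
}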

I showed my retort - in fact I have written this up in my project notes, giving reasoning and code examples as well as timings taken over 10, 100 and 100,000 iterations of the while loops shown above - and each and every time the iterator is faster than the auto... Yet this person refused to listen, refused even to look at my reply... Their input: "auto is right"...

This is code; there is no right or wrong, they are just different ways to get to the same conclusion. It's just that my solution results in quicker code and hence a slightly better user experience... And I accept that it's only 100 nanoseconds in my timings, plus the spike, but if I were willing to accept that today, what would I accept tomorrow?... How much worse could the user experience be?