Sunday, 31 December 2017

Using Flash Drives in ZFS Mirror

This post comes from an idea I had to let me easily carry a ZFS mirror away from a site and back again. We didn't need much space - only 5 GB - but it had to be mirrored in triplicate: one copy to stay local, one going into a fire safe on site, and the third to be carried off-site by the IT manager each evening.

The trouble?  A near-zero budget. So, for a little over £45, we have a 14 GB mirrored ZFS pool across three 16 GB USB flash drives and one three-port USB 3.0 hub.

It was perfect for the task at hand: extremely portable and cheap. I thought the same approach might help anyone trying to learn a little more about ZFS - a student, or even someone using a laptop as a small office server, as a laptop literally has its own battery back-up built in!

It's not the fastest solution - it's in fact extremely slow - but as an entry step it's perfect.

See the full video below; the commands I list were in use throughout...



Commands:

Listing Disks by ID...

ls /dev/disk/by-id

Listing disks to a file, for use in a shell script as you see me using...

ls /dev/disk/by-id -1 > disks.txt
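A minimal sketch of the sort of helper script that file could feed (my own illustration, not necessarily the one in the video; it assumes disks.txt has been edited down to just your three stick IDs, one per line, and uses the pool name "tank" from later in this post):

#!/bin/sh
# Build a three-way ZFS mirror from the IDs listed in disks.txt
DISK1=$(sed -n '1p' disks.txt)
DISK2=$(sed -n '2p' disks.txt)
DISK3=$(sed -n '3p' disks.txt)
sudo zpool create tank mirror \
    "/dev/disk/by-id/$DISK1" \
    "/dev/disk/by-id/$DISK2" \
    "/dev/disk/by-id/$DISK3"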

------------------

To install ZFS on Debian/Ubuntu Linux:

sudo apt-get install zfsutils-linux

------------------

To remove & purge ZFS from your system:

sudo apt-get purge zfsutils-linux

(and you will be left with "/etc/zfs/zpool.cache" to remove or back up yourself).

------------------

Command to create the pool...

sudo zpool create <Name> mirror <DiskId1> <DiskId2> etc...

The name we used here was "tank"; if you already have data on these disks you will need to add "-f" to force the change through.

------------------

Command to make a file executable - like our sh script:

sudo chmod +x <filename>

------------------

Zpool Commands:

sudo zpool status

sudo zpool import <name>

sudo zpool scrub <name>

sudo zpool clear <name>


You will want to "import" if you completely remove ZFS, or move one of your sticks to a new machine, etc.; simply insert the disk and import the pool by name.

Scrub will be used whenever you return a disk to the pool.  Remember, the point here is to let you replicate the data across the three sticks and be able to remove one or two for safe keeping, be that an overnight fire safe, or taking a physical copy with you.

Clear is used to remove any errors, such as the pool becoming locked out for writing - which it may if a drive, or all drives, are removed; you simply clear the current fault on the pool.
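A minimal sketch of the round trip when a stick comes back out of the safe, using only the commands above (pool name "tank" as before):

# Re-insert the stick, then see what ZFS makes of the pool
sudo zpool status

# If the pool isn't showing (fresh machine, or ZFS was reinstalled), bring it in by name
sudo zpool import tank

# Verify and repair the returned copy against the other sticks
sudo zpool scrub tank

# Once the scrub comes back clean, clear any errors logged while the stick was away
sudo zpool clear tank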


Summary:  Remember this is NOT the optimum way to run ZFS - it is actually extremely slow, you are replicating each write over USB and you can only cache so much in RAM - but this is not a performance piece.  This is about ensuring one replicates data for safe keeping.  A small office or dorm room server setup could be provided entirely by a laptop in this manner: it has its own battery backup, it is quiet (if you get the right machine), and really this is a very cheap way to play with ZFS before you move on to other, bigger hardware options.  Plus, I find the best way to learn about technology is to break it, even a little, so constantly breaking down your pools by pulling USB sticks out of them is an excellent opener to recovering your pools.  Play about first; don't put anything critical on there until you're really happy with the results.

For an excellent post covering creating ZFS pools, check out Programster's post here: http://blog.programster.org/zfs-create-disk-pools

And for the official ZFS documentation you can check things out with Oracle here: https://docs.oracle.com/cd/E26505_01/html/E37384/toc.html


Oh, and Happy New Year... I guess I made it to 100 posts this year...

Saturday, 30 December 2017

99....

I just noticed that I'm at a count of 98 posts for 2017, so this is just a post to give me 99.... The next might be more interesting... But not by much.

Friday, 29 December 2017

Windows Defrag v KingDian SSD... FAIL

I'm sure you all already know this one, and many vendor-specific implementations of SSD drivers/applications prevent this... Not me though.

Yes, I have a dirt cheap (£27) SSD from KingDian in my laptop... Wait, wait, wait, stop abusing me!  This is just a boot disk to hold a local OS and some scratch space; all my storage is provided to this machine over NFS.
The trouble?  Well, in the late summer the wife and I had a few days away in deepest, darkest Wales... Now this was a disconnected break; however, I couldn't resist taking the laptop loaded down with a few DVD-based games and a Steam download or two, plus I got to take some system code with me and a slew of PDFs I could read... I got a lot of work done, even without Linux (yes, I had to flatten the machine back to Windows 7).

And I pretty much put the machine away until just before Christmas.  I then needed it again, so flipped it out of the cabinet and fired it up, only to immediately find the battery was dead... It had been dying; I rely on a meagre supply of dodgy third-party batteries from China... You know, the kind of lithium cells you'd not want to leave alone too long.

But being Christmas I was skint, so I've had to wait until the end of the month to get a new battery... Which arrived today.  I left it to fire up and take its first charge, everything looked fine, it was still on Windows from the holiday, so I left it to it and went and enjoyed some TV...

Upon coming back up however, the machine is mental... Telling me I have no boot device, I had left the machine on... It has no network connection, so it can't have downloaded a Windows update and restarted... I was a little baffled... But fear not, I'm a trained professional...


This got us into a Windows Recovery, and after not giving a shit and starting normally, back into Windows, where I started to puzzle out the problem....

Event Viewer, and Applications... Nothing but the panic about the non-clean reboot... System... Hmm... System shows the last event at 16:54 and then the reboot at 23:54 (alright, don't judge, I watched three films okay, and we had a take-away!)...

 So what was the event at 16:54... "Service Control Manager".... Hmm, which service.... "The Disk Defragmenter service entered the running state"...

OH SHIT... This is an SSD, you really shouldn't defragment them (it reduces their life)... So it tried to defrag and not only crashed Windows, but left the device/BIOS unable to interoperate on a warm restart.  That's quite a feat.

A hot reboot sorted everything, so I'll not dig any further, however, I'm interested in this phenomenon.  The disk is a cheap chuck away one, so I start the defrag manually... BOOM, crash, same thing.

I trawl through my box of bits and find an old OCZ 128GB SSD (which has about 26% life remaining - lol) and I throw Windows 7 onto this... Let it all set up and I come back (it's about 23:35 now, if anyone's counting).  Starting the defragger on the OCZ device, no problems.... It just does nothing, the management software from OCZ stops any nastiness.



Back to the KingDian, boot up... and defrag... BOOM.

It's definitely something about the very cheap KingDian drive... Any ideas?  Drop them into the comments, but it's back to Arch for this machine, methinks.

Wednesday, 27 December 2017

C/C++: Stop Misusing inline.... PLEASE!

This is a plea, from the bottom of my rotten black heart, please... Please... PLEASE stop misusing the inline keyword in your C and C++.

Now, I can't blame you for this, I remember back in the 90s being actually taught (at degree level) to "use inline to make a function faster", and this old lie still bites today.

inline does not make your function faster, it simply asks the compiler to insert "inline" another copy of the same code wherever you call it, so this code:

#include <iostream>

inline void Hello()
{
    std::cout << "Hello";
}

int main ()
{
    Hello();
    Hello();
    Hello();
}

Turns into the effective output code of:

int main ()
{
    std::cout << "Hello";
    std::cout << "Hello";
    std::cout << "Hello";
}

What does this mean in practice?  Well, you save yourself the jump into the function, the slot on the stack holding the return address, and the RET at the end which pops that address off the stack to return from the function.

This is WHY people were told to use inline to make things faster in the 90s.  I was taught this when I had a system with around 254K of working RAM for the programs I was writing; saving that space on an 8K stack was important in complex systems, especially if you were nesting loops of calls.

However, today, on a modern processor, even modern embedded processors, DO NOT DO THIS!

You're no longer saving anything, you're in fact making your code bigger and slower: your program expands in size and you end up fetching more and more from the slower RAM layers rather than the program's instructions fitting into the faster cache layers.

As you get cache and page misses you fetch more; you literally stop the program, switch context and then switch back, halting your program in its tracks as it suddenly has to go and load the Nth of possibly thousands of repeated stanzas of code.

Don't do this, don't lumber yourself, let the compiler handle its own optimizations, they're pretty good at it!
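If you want to see what your compiler actually does with a small function, one quick check (assuming g++, as used elsewhere on this blog) is to dump the generated assembly at a realistic optimization level and look for whether the call is still there:

g++ -O2 -S main.cpp -o main.s

With the Hello example above you will typically find that, at -O2, the compiler has inlined the little function all by itself, keyword or no keyword; inline is a hint, not a command.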

Now some of you will be saying "yeah, no shit Xel, what's your point?"... My point is I recently had around 4000 lines of code handed to me, a huge long listing, and around 40% of it was a series of functions.  This whole thing could compile down to around 62K.... But when compiled it was just over 113K... This was too big to fit into the memory of the micro-controller it was for.

The developer had been working merrily over the yuletide, happy and satisfied their code would work; they went to work this morning and instead of running the code on the IDE within an emulator, they actually ran it on the metal.

It crashed, and they couldn't figure out why, the size was why.

And then they couldn't work out why the code was so big... It is tiny code.

They came, cap in hand, to myself - and I took no small satisfaction in rolling my eyes and telling them to remove the "inline" from EVERY function... "But it'll run so slowly" they decried... "REMOVE THE INLINE".

Of course it works, they have the system fitting into the micro-controller RAM, the stack is working a lot harder, their code is a lot smaller, and they are now in possession of a more balanced opinion on "inline".

* EDIT *

One person (yes, hello Hank) asked me "why": why was this not a problem on the emulator, but a problem on the bare metal?  Well, the bare metal was using a different compiler from the pseudo compiler behind the Windows-based IDE; the Windows-based IDE was actually running the code through a compiler which ignored "inline", and so produced code a little like this:

(Image Courtesy "CompilerExplorer")

You can see that even though "int square(int)" is marked "inline" it still contains the push to the stack and the pop/ret pairing, and calling it from main results in two function calls to the same assembly.

The bare metal compiler did not, an undocumented difference I might add.
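For anyone reading without the screenshot to hand, a minimal sketch of the kind of example it showed, reconstructed from the description above (the exact code in the image may have differed):

#include <iostream>

// Marked inline, but a compiler that ignores the hint (or runs unoptimised)
// still emits a real function: prologue push, body, then the pop/ret pairing.
inline int square(int x)
{
    return x * x;
}

int main ()
{
    // With the hint ignored, these become two separate calls to the same
    // square routine rather than the multiply being pasted in at each site.
    std::cout << square(3) + square(4);
}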

Saturday, 16 December 2017

C++ : The unrandom random number...

I've been working in some C++, with Boost to be precise.  The machine I'm working towards finally has a processor with SSE3 in it, and so I've been revisiting the GUID generation code; Boost specifies a couple of defines you can set up before including the uuid headers to help...

#include <iostream>

#ifndef BOOST_UUID_USE_SSE3
#define BOOST_UUID_USE_SSE3
#endif

#include <boost/uuid/uuid.hpp>
#include <boost/uuid/uuid_io.hpp>
#include <boost/uuid/uuid_generators.hpp>
#include <boost/lexical_cast.hpp>

const std::string GetGuid();

int main ()
{
    for (unsigned int i(0); i < 100000; ++i)
    {
        std::cout << GetGuid() << "\r\n";
    }
}

const std::string GetGuid()
{
    boost::uuids::uuid l_guid =
        boost::uuids::random_generator()();
    return boost::lexical_cast<std::string>(l_guid);
}

This code looks fairly innocuous, "GetGuid" is the key part; you may argue that you're setting the random number generator up on each and every call, but the output is fairly simple, this is only a test....




However, if we look carefully, there is always one column the same; when it's running on the screen this is very obvious...


Generating hundreds of thousands, taking minutes and minutes, hasn't changed that one character.

Why this is isn't clear to me, I need to do some more digging.  I'm going to hazard a guess it's that we construct and release the number generator each pass; we should perhaps instantiate one and keep it, so the sequence of randomness is preserved.
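Either way, constructing the generator once and reusing it is the usual advice and a cheap change to try; a minimal sketch, assuming the same headers as above (worth noting too that random, version 4, UUIDs carry a fixed version digit, a '4', at one position of the text form, which may well be the unchanging column here whatever the generator does):

const std::string GetGuid()
{
    // Construct the (expensively seeded) generator once and reuse it
    // on every call, rather than rebuilding it each time through.
    static boost::uuids::random_generator s_generator;
    return boost::lexical_cast<std::string>(s_generator());
}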

Any suggestions?  Hit the comments below!




P.S. Yes, I know I've not used RAII with the l_guid assignment there, but I'm in a hurry and only just noticed.

Thursday, 14 December 2017

Manifold Garden - Chyr's Update

I've mentioned William Chyr's work of art that is Manifold Garden in previous posts; however, he's just released a development update to the world: the game is slightly behind schedule, but he is hopeful of an early 2018 release.


You can hear more from William himself on YouTube below:


Or you can get the low-down via the Steam app entry here.

I'm sure, if you're anything like me, you'll still see this amazing development as worthy of your attention.

Enjoy!

Wednesday, 13 December 2017

Virgin Media Cable to Wet String Maybe?

It has been just under a month since I started to measure my internet connection speed.  I've been paying for 50Mbit and receiving pitifully less at all times of the day, with huge dips during what is dubbed "peak time".  We get massive slow-downs whilst streaming - constantly - opening, say, ITV Player and then opening a Wikipedia page totally freezes the player until the whole page has loaded.  Remember, Wikipedia is mainly text, there's very little media data being exchanged, but the player is just cut off; it's dreadful.

There's no reason for this.  When I was paying for 200Mbit I was receiving around 33-36Mbit at all times, so by dropping to pay for only 50Mbit I was going to save money and still get the speed I had been getting - since I never, ever got more...

But it seems this is totally beyond Virgin Media; they're playing speed-throttling shenanigans....

However, the clever chaps at Andrews and Arnold engineering might just have the solution for me... Wet string...

That's right, they're sending internet data - which I presume is TCP/IP - with the string acting as a conduit for the ADSL signal... Impressive; more impressive than my paid-for cable service!

Monday, 11 December 2017

Crashed my Build Server....

When I say I crashed it, I mean... It just locked up and I had to soft reboot it... And when I say build server I mean one of the virtual machines on one of my Xen Hosts....

So, the machine is a fairly beefy 16-core machine with 48GB of RAM running on a Dell server under the desk; the disk base is a 200GB RAID-0 unit over a bunch of 2.5" 10,000 RPM SAS drives behind a PERC 5/i RAID controller....

The machine is only really spooled up for big builds, and this was one of them, I wanted to build LLVM support before bed.

The problem?  When I performed the build with "make -j all" it would get to 16% and then blank the screen, and totally lock up, nothing, nada, nowt... I left it for a while but nothing happened, and yes the LLVM build is time consuming but it doesn't lock at 16% for minutes.

Soft reboot, and the same happened again...


I've started the build again with "make -j 15" rather than all sixteen cores.  And it's already up into the 35% area of the build whilst I've been typing this....

But, what the heck locked the machine up before?  It wasn't actually using all the processors all of the time, surely?  Maybe?
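One possibility worth checking - and this is an assumption on my part, not something I've confirmed on that box - is memory rather than cores: with GNU make, "-j" given without a number places no limit at all on concurrent jobs, and LLVM's compile and link steps are notoriously memory hungry, so an unbounded "-j" can exhaust RAM long before it runs out of processors.  A common middle ground is to cap the job count at the core count:

make -j"$(nproc)" all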

I might have to set up one of the older 2950's and have a play about with this, leaving my one beefy machine alone.

Just on a side note, could you imagine the mess your system would be left in if you soft rebooted this kind of kit, mid-build, with no warning... LOL.

Saturday, 9 December 2017

Start C++?

I've been writing C++11 or better code, well since 2010, as I started doing so with the TR1 as was.  We're nearing eight years since then, and it starts to show.

So, where would I recommend starting to learn modern C++?  Well, if you've never programmed before, don't start by learning C or C++; go learn Pascal, or Python, or something else which is more friendly.  I started out in Pascal, for a good three years, before I started to work in C and later moved into C++ (in 1998), so if 20 years of C++ have taught me anything, it's don't try to learn it as your first language.

Once you have a concept of how to program, then start to learn C++, and I would recommend finding someone - hopefully like me - and asking them.  An hour's chat with them, to help swap what you know with what they know, is a good start.

Failing any such friends, there's YouTube: watch CppCon and BoostCon talks about programming C++, but most importantly get a development environment and cut some code; if you must, go for the Community edition of Visual Studio 2017, but otherwise get a Linux machine up and running and have a play.

If you work in an office where there's a C++ guru, ask them, if they're worth their salt they'll be more than happy to take you through a few things.

And failing that, I include a set of programs below, one through three; these are the most simplistic C++ programs I would recommend you start off working with, and if you want to know more, comment below.


So, open an editor, and write this C++ code, then save the file as "main.cpp":

#include <iostream>

int main ()
{
   std::cout << "Hello World";

}


If you are in Visual Studio you can run that file directly; if you're in Linux, close the editor after saving and let's use the GNU g++ compiler (install it on Ubuntu, say, with "sudo apt install g++"), and you compile this into a program like so:

g++ main.cpp -o example1

Your program output will be called "example1", and you can run that program, and it'll say "Hello World".
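On Linux you run it from the directory you built it in like so:

./example1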

Of the code itself I only want to explain the first line... the include.  This tells the compiler to include some code for you to use, and "iostream" provides the input/output streaming functions for you.

int main is the first function the program starts to run from; int is the return type, meaning integer, but in C++ we don't need to return a value here - if anyone argues with you about this, they're wrong.  The name "main" is a special name and the compiler makes sure your program starts with this function entry point.  The empty brackets show we're not passing anything into the function, there are no parameters.

The braces mark out the body of code for the main function, so it starts with an open brace, contains lines of code and then ends with a close brace, and yes I call them "braces" not "curly brackets" :)

The one and only line of code we've got left is the output call, this is streaming the value on the right of the "<<" chevrons to the standard character output stream... "std::cout"... I'll reiterate "standard character output stream".  And the value we're streaming is a string (note the quotes in the code) "Hello World".

This is the only line of code with the all-important semicolon ending; this is used to tell the compiler that we're done with our line of code.

Go, try this...


The next most basic program for C++ newbies to learn is to maybe output some more kinds of data.... after, say, asking the user something... We've seen cout, how about "standard character input"?...

#include <iostream>
#include <string>

int main ()
{
    std::cout << "What is your name? ";
    std::string name;
    std::cin >> name;
    std::cout << "\r\n";
    std::cout << "Oh Hi " << name;
}

We're introducing a new header, the "string" header helps us store strings of characters in the type "standard string"... "std::string"... Did you spot that?... This is our first variable, and it has the name "name" so we can refer to it later.

We ask the user a question by sending characters out "<<" to the output stream, and then we read them back in ">>" from the character input stream, into the "name" variable we just created.

Next we output a carriage return "\r" to move our caret back to the left of our screen, and we move to the next line "\n".

And finally we output a piece of text AND our input variable!

You can play with this code with other variable types, other than string... Try "int" to read in a whole number, or "float" to read in a floating point number.
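For example, a small variation on the same program reading a whole number instead (the prompt and variable names are just illustrative):

#include <iostream>

int main ()
{
    std::cout << "How old are you? ";
    int age;
    std::cin >> age;
    std::cout << "\r\n";
    std::cout << "Oh, " << age << " years young!";
}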


But our third program will involve some actual processing.... Let's average the ages of a set of people we ask the user to input...

#include <iostream>

int main ()
{
    int TotalAge (0);

    int age;
    std::cout << "Enter the age of person 1: ";
    std::cin >> age;
    TotalAge = TotalAge + age;
    std::cout << "\r\n";

    std::cout << "Enter the age of person 2: ";
    std::cin >> age;
    TotalAge = TotalAge + age;
    std::cout << "\r\n";

    std::cout << "Enter the age of person 3: ";
    std::cin >> age;
    TotalAge = TotalAge + age;
    std::cout << "\r\n";

    // Divide by 3.0f so we get a floating point average, not integer division
    float Result = TotalAge / 3.0f;

    std::cout << "The average Age is: " << Result;
}

Here we have much more code; we ask for each person's age, adding it to the total, then we calculate the resulting average age and print it out.  Take a moment to look at this....

What do you see?  If the first thing you see is that there are three sets of the same code in there, then you have the seed of a programmer within.  If however you just see a mess, then maybe C++ isn't for you.
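If you did spot the repetition, here's a little taste of where that instinct leads - a quick sketch (not part of the exercise, and it uses a "for" loop we haven't covered yet) where the repeated stanza appears only once:

#include <iostream>

int main ()
{
    int TotalAge (0);

    // One copy of the prompt-and-add stanza, run three times
    for (int person = 1; person <= 3; ++person)
    {
        std::cout << "Enter the age of person " << person << ": ";
        int age;
        std::cin >> age;
        TotalAge = TotalAge + age;
        std::cout << "\r\n";
    }

    float Result = TotalAge / 3.0f;
    std::cout << "The average Age is: " << Result;
}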


Monday, 4 December 2017

Great Rack Mount Mistakes #5

It has been a while since I've brought to you the tales of woe from my past... But this one isn't a tale of woe for myself, it was some other poor bugger who had to suffer, though I was involved.

After my first in-depth IT-related job, I got into looking after some big systems, and I mean so Big they could have starred Tom Hanks... The last of these ended, officially, in early 2001; this was my looking after an IBM AS400 machine.

It had several terminals hooked into it, many suited analysts (as the non-programmers were called) regally sipped coffee and generated reports from it, and there were also several ASCII serial wireless hand-held terminals for roaming about the site with, all pretty cool.  I however was not involved in any of this; my job was to look after the PCs on the site and keep the AS400 fed with back-up tapes.

One of the PCs however took me within a solar breath of the corona of glory that was working with the BIG IRON, and this was a little IBM PC running OS/2 which was used to actually boot the AS400.  The mechanism escapes me, the details I've long forgotten; I remember it using OS/2, a terminal and a fancy script.

As I said, this role ended for me in 2001, and I whisked my way off to work for a little software shop in Alcester, Warwickshire (where I know I was a lazy pain in the arse - sorry about that lads - I grew up later, honest!).  Anyway, 23rd December 2001, I had a call at my parents' home, a chap asking for me by name.  They handed out my personal mobile (rocking the Nokia 3310 on Genie Mobile).

Well, this guy didn't let up.  Christmas Eve, I'm mid-way through watching the Muppet Christmas Carol for the fifth time that day, and I finally look at the phone, and the seventeen texts asking me to call the head office of my prior employer.

Which I do, and awake a security guard who had less hold of English than my dog - and the dog's Greek...

After a mixed conversation, I got through to a chap who was clearly in a server room, you could hear the noise behind him, I love that noise.

As he's talking to me however, the noise disappeared, dead silent...

"Did you just leave the server room?"

His reply.. "No, it just shuts off, it never completes its boot".

He was talking about the AS400, of which I knew nothing, they had very expensive IBM support for it on the way, but they were very worried, and wanted me to take a look, as everything had to be up for the Boxing Day sales - this was my time not in a manufacturing world, but in a retail world - the pressure was real, the target was live.

On the offer of a very nice cash sum, I jumped in my 206 and drove down to the offices; waved through security, I signed in and took a look around my old stomping ground.  Things had changed since I was last there: the partitioning wall to the server room had been removed, and where the analysts sat was now occupied by a modern series of server rack positions; they were in the process of moving everything to a set of Dell PowerEdge 2U servers, with A/C, a hot aisle and a cold, some decent kit.

Turning around there was the great big AS400, jet black, with a rectangular base rounded at one end.  And the machine perched on this raised platform.



This was the machine which would not boot... The problem?

Well, that partitioning wall, which had been removed... "How did you remove this wall?"

"Oh" he said "we had them put plastic sheets double lined from the ceiling to floor, took down the stud walling, and clean up, we never had to turn the AS400 off"

"Fabulous" I noted his pride "so the silver racks which were here, the shelving with parts and the little PC sat about here"  I intimated the corner just below waist height.

"All that was removed, with the wall, just old junk parts and pieces"

"Okay" I look around "So where did you remove it all to?"

"A skip" he shrugged "About three months ago"


"Aha" I nodded "I know your problem, the AS400 is coming up into advisory mode and awaiting the start signals from the script host, I'm going to guess you don't run AS400 elsewhere, you're winding down to the new servers?"


"Yeah" he was quite flustered "I know nothing about this hunk of junk, I just need it to work"

"Then you need to find a PC running OS/2 before morning, and restore from one of the back-ups I used to take onto tape, if you can"

He went whiter than my best linen on wash day... "OS2 PC?  Why?"

"Because the machine which sat here, was the boot master for the main shell into the Analysts layer, it sat here" indicating the wall again "whoever unplugged and threw it in a skip should really have looked a the holistic picture, it was a very important little machine, which is why we had two of them and spare parts on those shelves, and why it also got backed up to tape when it was installed or updates performed on it"

Silence filled the gap.

"I'll take that cash and get back to my Christmas pudding".




Thursday, 30 November 2017

C++ : Trust the STL

One sore lesson to teach some developers is when to trust the compiler; once you've gotten that across, you have to start teaching folks to stop re-inventing the wheel.

If someone has already implemented a file handler, or a serial port abstraction, or a wrapper for some obscure feature, you need to evaluate that offering...

To evaluate whether a library is worth using, firstly see if it works, then see how many folks actually use it; the more that use it, the more likely bugs have been flushed out and the whole thing thoroughly tested.

Leveraging this kind of mature code assists in bootstrapping the startup phase of new projects.

Boost is a noteworthy example of what I'm talking about here; many software shops (at least the ones I know) resist using open-source or third-party libraries, preferring to stick to in-house developed niche implementations until the very last moment, and this of course slows development and completely stymies innovation.

Boost however is one step further than the problem I'm going to tackle today... The Standard Template Library...

The STL is often commented upon negatively, despite it being a hugely available resource, vastly and deeply tested and constantly incorporating new innovations.  Whole books have been written on the topic, and yet one can still find projects and individuals resisting using it.

STL nay-sayers will quote "no need for an STL requirement", "uses less memory than an STL implementation" or "faster than the STL"...

The problem with this attitude is: are such folks going to sufficiently tackle testing of their bespoke solution, and is that bespoke solution going to be as robust or as easily maintained as something using the STL?

Probably not, and this is a hard one for die-hard "purist" developers to swallow.  We want to write all our own code, we want to be gods in our domain; the trouble is, for the vast majority of us, god has already been there and he wrote a decent enough library to do the task we need doing... So leverage it!

I came across one such niche item the other day, with an algorithm to see if a string starts with...

They hadn't used Boost, or the STL, to do the searching, yet perversely had used a std::string... Their code looked a little like this:

const bool StartsWith(
    const std::string& p_Text,
    const std::string& p_Pattern)
{
    bool l_result(true);

    if ( p_Text.length() >= p_Pattern.length() )
    {
        for (unsigned int i(0); i < p_Pattern.length(); ++i)
        {
            if ( p_Text[i] != p_Pattern[i] )
            {
                l_result = false;
                break;
            }
        }
    }
    else
    {
        l_result = false;
    }

    return l_result;
}

It is fairly logical code: they're looking at the length of the presented parameters to avoid looping when not required, then they loop from the start and only return a fail when a character is a mismatch.  Looking at this with programming eyes from 1996, I'd say this is fine.

Looking with eyes well aware of the STL, I cringe a little, and I replaced the whole function like this...

const bool StartsWith(
    const std::string& p_Text,
    const std::string& p_Pattern)
{
    return (p_Text.find(p_Pattern) == 0);
}

One line of very much more maintainable, vastly more readable and easy to comprehend code...

The developer of the original however was not happy... "you're wasting resources, this will find any instance and tell you the index".... he's right, it will, but the STL will still be faster than his code.

I demonstrated this by plugging both into CompilerExplorer... He still refused to listen.

Therefore, I've written this little helper project to run the two functions side by side, with three threaded tests: looking for the match, a long match and a negative match at the start of the string (code on GitHub).


The results of this are interesting; you see, the project itself favours cases where it's highly likely the string being searched for is present, and therefore we don't need to worry too much about the odd test not finding a match taking longer... This is exactly the behaviour seen in the STL-based find example.


The Short search time, for the same data, on the same processor went from 28358 microseconds to just 5234... That's about 81% faster.  The longer search is more stark, falling from 185966 microseconds to just 6884, just over 96% faster!

The rub is the negative case took longer, rising from 19765 microseconds in the hand-crafted search to 25695, just over 30% slower.  Some of this increase can perhaps be explained by the hand-crafted version using the lengths to quickly skip too short an input; otherwise it is simply that the STL find has to iterate over the whole string when no match is found.  A hybrid which skips the find entirely when there is insufficient data may be in order; however this may add to our maintenance burden and lower code clarity - swings and roundabouts.
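A minimal sketch of such a hybrid, keeping the same signature as above (and noting that std::string::compare with an offset and count is arguably an even neater fit for "starts with" than find, as it never scans beyond the prefix):

const bool StartsWith(
    const std::string& p_Text,
    const std::string& p_Pattern)
{
    // Too little text to ever hold the prefix, bail out cheaply
    if (p_Text.length() < p_Pattern.length())
    {
        return false;
    }

    // Compare only the first p_Pattern.length() characters of p_Text
    return (p_Text.compare(0, p_Pattern.length(), p_Pattern) == 0);
}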

However, clearly in the case of this project, dismissing the STL resulted in slower code: the system has a propensity for matches, they're quite short, and all target platforms have the STL built in - use it.

Never be afraid to ask questions of what you're working with, ever.

Tuesday, 28 November 2017

CMake rather than Mammoth makefile marathons

I'm having difficulty communicating with some folks about the beauty of cmake and using ccmake to leverage that beauty.

These are folks who are either completely ignorant of what a makefile should look like, are happy to manage their own, or at worst have been put off makefiles by inheriting projects which have spiralled out of control: mammoth makefiles so complex as to prevent any cost-effective entry point for new developers - i.e. they're too hard to learn, or sufficiently obfuscated to allow established developers to retain their positions of glory and power.

I don't subscribe to that ethos however, and believe that as a leader in development you should facilitate everyone being able to do everyone else's development role, be that starting a new project or continuing an old one.

It perhaps comes from my working alone and defining a role which others are then keyed into; I have been forced to allow entry to my work, to make the cost of someone else bootstrapping my work into their wetware (brain) as low as possible.

Mammoth makefile marathons are not the way for me to do that; a CMakeLists.txt file, now that's a better proposition.  However, even here you have to take care, as some folks are ignorant of the tools available; to leverage cmake directly one might use this kind of command line...
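Something along these lines (an illustrative example of my own; these cache variables are standard CMake ones, but the values are made up):

cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local -DCMAKE_CXX_COMPILER=g++ -DBUILD_SHARED_LIBS=ON <PATH_TO_CMAKELISTS>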


This is daunting for a newbie, and even an experienced developer has to admit...

ccmake <PATH_TO_CMAKELISTS>

This is a much more succinct and easy to access way of getting into your CMake way of working, the cost of entry being so low as to actually make introducing new developers to Linux development, or just general CMake usage, trivial.

So where am I failing to communicate this?

Well, cmake, ccmake... The naming conventions of the two are so close as to confuse people; they don't hear the second C in ccmake, or they think I'm talking about the C programming language.  This is a lack of understanding on their part; people being people, however, they don't want to admit they've no idea what you're talking about.

(As an aside, folks, if you want to be a good developer, and a good person, please admit when you don't know something; it causes so many fewer issues at development scrum time if, when you are handed work, you simply state "I know nothing about that".  Someone else can be assigned the role, or better still, you can get training and schedule the work more effectively!)

My solution to this difficulty therefore?

Rename the programs, I've created two symbolic links in /usr/bin....

sudo ln -s /usr/local/cmake/bin/ccmake /usr/bin/makefile_prep_gui
sudo ln -s /usr/local/cmake/bin/cmake /usr/bin/makefile_prep_cmd
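So a developer now just runs this from a directory containing a CMakeLists.txt (the dot simply meaning "here"):

makefile_prep_gui .

and never needs to know it was ccmake all along.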

I've essentially side-stepped the communication problem by giving cmake the working name "makefile_prep"; this means that those opposed to ceasing direct use of makefiles still feel empowered, but are subtly diverted to using an automated tool.

Immediately, questions and opposition to changing the status quo have ceased, and folks are talking about using the new "makefile_prep" tools.... how clean they are, how nice the builds look, how easily they integrate with CLion, and "the output from my makefile_prep looks exactly like the build going on inside the IDE (CLion)"... Little do they realise they're both cmake!

Oiling the cogs of resistance to change, this is where I'm living at present... It's not an easy task, but sometimes it's rewarding.... Now to do the same in the day-office.