Monday, 31 October 2016

Software Engineering : "warning: defaulted and deleted functions only available with -std=c++11"

warning: defaulted and deleted functions only available with -std=c++11 or -std=gnu++11

When you're using -std=c++14?!?!  What the heck is going on here?

I've noticed this strange bit of behaviour with g++ and warnings when compiling with multiple cores. I've stripped this example back to the bare minimum, so let's define what we're building, then look at the strange warning which comes out. First of all, we need to compile two files, completely independent of one another:

main.cpp

#include <iostream>

int main ()
{
    std::cout << "Hello World" << std::endl;
}

This is our first file; the other is going to be a class:

Data.h

#ifndef DATA_HEADER
#define DATA_HEADER

#include <string>

namespace Xelous
{
    class Data
    {
    private:
        std::string m_Data;

    public:
        Data(const std::string& p_Data);

        const std::string& GetData() const;
    };
}

#endif

And the code for this class, in data.cpp, looks like this:

#include "Data.h"

namespace Xelous
{
    Data::Data(const std::string& p_Data)
        : m_Data(p_Data)
    {
    }

    const std::string& Data::GetData() const
    {
        return m_Data;
    }
}

This code so far is all fine. However, in our data header we don't want the default constructor, so we're going to delete it, like this:

#ifndef DATA_HEADER
#define DATA_HEADER

#include <string>

namespace Xelous
{
    class Data
    {
    private:
        std::string m_Data;

    public:
        Data() = delete;

        Data(const std::string& p_Data);

        const std::string& GetData() const;
    };
}

#endif
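
Just to prove the point of that "= delete" (a quick aside, not part of the build above): if anything now tries to default-construct a Data, the compiler rejects it outright:

#include "Data.h"

void Example ()
{
    // Xelous::Data l_empty;          // error: use of deleted function 'Xelous::Data::Data()'
    Xelous::Data l_data("Hello");     // fine, the std::string constructor still exists
}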

Fairly simple stuff so far, and no problems, no errors... However, I always build with "-pedantic", especially as I get closer to release and I'm reviewing code, so let's see my build file for the above project:

CC=g++
STD=c++14
WARNINGS=-Wall -Wfatal-errors
PEDANTIC=-pedantic
OUTPUT=example

MAIN=main
DATA=data

CompileMain: ${MAIN}.cpp
	${CC} -std=${STD} ${WARNINGS} ${PEDANTIC} -c ${MAIN}.cpp -time

CompileData: ${DATA}.cpp
	${CC} -std=${STD} ${WARNINGS} ${PEDANTIC} -c ${DATA}.cpp -time

Let's just stop there and look at what we have in the build file so far: we have the compiler, the language standard to use, the warnings, we're using pedantic, and I'm only compiling the two files; there's no linking going on yet.

I always prefer this so I can trap individual compile errors or problems, speeding up the overall development... Once I'm happy both files compile cleanly I can then combine them into their link, by adding:

Link: ${DATA}.o ${MAIN}.o
	${CC} -o ${OUTPUT} ${DATA}.o ${MAIN}.o -time

This will link everything up...

There isn't a problem with this code; we can save the file now and call "make" on those three targets and it will work fine... Let's just complete the makefile:

clean:
	rm -f ${DATA}.o
	rm -f ${MAIN}.o
	rm -f ${OUTPUT}

clearscreen:
	clear

all: clean clearscreen CompileData CompileMain Link

These last three targets are our clean, a simple clear screen, and then the "all" target which chains the whole lot together.

So, we can now "make all"; building serially like this, everything compiles and links with no warnings.


Building one after the other is fine; however, if I use "-j2", that is, telling make to build data and main on different cores at the same time and then link the result of both, out pops the warning quoted at the top of this post.


Both compiles are being sent to the compiler with -std=c++14; when we're using serial, single-focus compilation there is no warning, yet with two jobs the warning pops out?
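
One suspect worth flagging before I go (a guess at this stage, not a confirmed diagnosis): the "all" target lists clean, clearscreen, CompileData, CompileMain and Link as prerequisites, and under "-j2" GNU make is free to run those in parallel, because nothing declares an ordering between them; the clean, the compiles and the link can race one another. Declaring the dependencies explicitly at least makes the parallel build deterministic; whether it also silences this particular warning I've yet to confirm:

CompileMain: ${MAIN}.cpp
	${CC} -std=${STD} ${WARNINGS} ${PEDANTIC} -c ${MAIN}.cpp -time

CompileData: ${DATA}.cpp
	${CC} -std=${STD} ${WARNINGS} ${PEDANTIC} -c ${DATA}.cpp -time

# The link step now depends on the compile targets themselves...
Link: CompileData CompileMain
	${CC} -o ${OUTPUT} ${DATA}.o ${MAIN}.o -time

# ...the compiles wait for the tidy-up via order-only prerequisites...
CompileMain CompileData: | clean clearscreen

# ...and "all" just asks for the final link.
all: Link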

I've actually run out of time on this very busy Sunday to look into this any further, so I'm going to schedule this post for tomorrow and let it loose on the world...

Friday, 28 October 2016

Administrator : Using Python to Serve Files (HTTP)

The second in my mini-series on how to share storage between machines easily; this time we're going to look at using Python as a simple HTTP server...

Linux
On Linux, with Python 2 installed (use "python --version" to check) you can simply run:

python -m SimpleHTTPServer 8080

And the current folder will be served up on all the machine's network interfaces on port 8080 (without the port argument it defaults to 8000).

This is extremely useful for letting a remote machine pull files quickly off a system, and it's a very good technique to remember when you're developing and deploying, because you can just host your "bin/debug" or "bin/release" directory to the remote system, and when your builds complete that remote side can pull the new files or images over.

To do the pulling on Linux, I prefer to use wget. Let's assume the above folder is "/home/xelous/share", inside it is a file "hello.txt", and the server's IP is 123.0.0.1; this is the wget from the remote machine:

wget http://123.0.0.1:8080/hello.txt

And voila, the file is whisked as an HTTP download across to the remote machine's current folder.

You can write scripts to pull lots of files over and then do builds; with a makefile you can kick builds off from your code quickly as you carry on working. This is very useful in my set-up, as I have an 8-core laptop I can kick builds off on whilst my local workstation carries on doing another build. When you're producing ARM kernel builds for two different platforms at the same time, moulding this simple server and wget to your whim streamlines your development speed so, so much!
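
As an aside (not part of my set-up above, just an option worth knowing): wget can do the crawling by itself; something like this should mirror the whole served folder into the current directory ("-r" recurses, "-np" stops it climbing above the start folder, "-nH" drops the hostname directory):

wget -r -np -nH http://123.0.0.1:8080/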

Windows
On Windows you have to have a command prompt with the path to Python set; let's assume our Python is installed in "C:\Python":

PATH=%PATH%;C:\Python

Then start the server from the "web" folder:

cd \web
python -m http.server 8080

This does exactly the same as the Linux version ("http.server" is the Python 3 name for the module; "SimpleHTTPServer" is its Python 2 equivalent), except now we're hosted on Windows, sharing the "C:\web" folder on our server.

Browser
You can browse straight to both of these servers and see all the files & folders too; simply browse to: http://123.0.0.1:8080/

Why does this exist?
I had a Windows machine which was on a "secure" network, and from that machine I needed to pull a lot of files over to a Linux workstation. I had no rights to create a network share on the Windows machine, and I didn't want to copy everything off onto USB or over the network, because I'd have been creating ghostly copies of all the files on those removable storage intermediaries.

So for security and integrity I wanted to get the files as straight from A to B as possible.

The Windows machine had Python installed; opening a command prompt, I found the Python exe in "/users/myself/AppData/Local/Programs/Python", set the Path as above, then moved to the root of the system and started the server.

On the Linux machine I had a simple Python script which pulled the server's "index.html" (which is just the file & folder listing), crawled that downloaded index and called "wget" on each file, or "mkdir" for each folder... and recursed down the tree...

My next post will be that very script... Because I am nice like that!

Security Lesson
To any system administrators out there... this is a loophole on ALL machines with Python installed; take a look if you need to stop this happening!

Thursday, 27 October 2016

Administrator : Linux Network File System (NFS) Mounted Drives

Over the next few days I'm planning to bring you at least three videos about sharing files between different systems, specifically Windows and Linux... Today, the easiest (at least for me): Linux-to-Linux sharing.

For this you will need SSH access and a user account on the remote system, and sudo (root) rights on both machines. I'm running Ubuntu machines here, both for the client and the server; which variant (32/64-bit) makes no difference.

The Server
sudo apt-get update
sudo apt-get install nfs-common nfs-kernel-server

We need the nfs-kernel-server here, and it will run as a service. Once it's all installed we need to make a folder; I create them like this, making the folder owned by myself:

sudo mkdir /media/xelous
sudo chown xelous /media/xelous

Then I edit:

sudo nano /etc/exports

And I add to it:

/media/xelous     150.0.8.*(rw,no_root_squash,async)

This is the local folder we're exporting, and we're making it available to ALL the machines in the "150.0.8.*" range of IP addresses.

Saving this file, I then need to restart either the whole machine or just the service:

sudo /etc/init.d/nfs-kernel-server restart

You can then run:

showmount -e

to see the export you've just created. If you have an issue and it doesn't show up, check the above again... because it does work, honest... The most common problem is permissions on the folder you've created; on systems where you are not the administrator, it's sometimes best to share a folder from your /home directory.
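
One extra tip (beyond the steps above): if you edit /etc/exports while the service is already running, you can re-publish the export list without a full restart:

sudo exportfs -ra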

The Client
The client is a simpler installation:

sudo apt-get update
sudo apt-get install nfs-common

Then you can check the remote export; let's assume the server is on IP 150.0.8.40:

showmount -e 150.0.8.40

You should see the export you created on the server.

Let's create a folder locally, into which we'll mount the remote folder:

sudo mkdir -p /media/remote
sudo chown xelous /media/remote

Now, I happen to be the user "xelous" on both machines, but change the username for your own local and remote machines... Mine is not best practice here, as the two accounts just have different passwords...

To mount the remote folder locally:

sudo mount 150.0.8.40:/media/xelous /media/remote

So, this is mounting the remote folder onto the local one; on the local machine I can then just hop into that folder and work, knowing all the files are trickling out over the network and onto that remote machine.

This is very useful if you're going to run a thin client system, or are working on a machine with no, or read-only, local storage.
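
As an extra (not covered in the original steps, so treat it as a sketch and adjust the IP and paths to your own): if you want the client to re-mount this automatically at boot, a line like the following in the client's /etc/fstab does it:

150.0.8.40:/media/xelous    /media/remote    nfs    defaults    0    0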

Why does this exist?
The driver behind this was my main development machine running out of disk space, and my not being allowed to install a new drive... yes, go figure (don't worry, I have asked the fair fellows of IT for access to my BIOS again - yes, I'm still on a machine with a BIOS, not UEFI, don't laugh).

So, with my workstation critically low on disk space, where was I going to put everything? Well, on another Linux machine I have on the network, of course: a big fat server with a slow CPU but oodles of storage.

Tuesday, 25 October 2016

Companies : Don't Rush & Ruin your Software!

Why do some companies do software backwards? I'm not going to be talking about my employer; this isn't a comment about the work I do for them, it's a comment about a company which supplies us, and which has provided subsystems for us: building blocks which we want to stick together into a product... A little like putting together a PC at home: you build the machine, but Intel make the CPU... I hope you're getting me so far...

The problem I have is the increasing number of vendors who seem to see software as, at best, an afterthought or, at worst, an evil necessity.

The software in your systems today is the glue which holds everything together; it coordinates the physical, right down to the component level. If you make a great button to turn on a great machine, that's fine if it's mechanical; but if that button is driving a piece of software, a trigger, a service or just a PIC, then for heaven's sake test it, think about it, write the software, try it yourself!

A great chef should never, ever deliver a dish to a table before they have tasted it! So you, as a software engineer, as a provider of components, as a system integrator, should taste-test your own software!

The number of absolutely abysmal software packages backing up otherwise very good products is ever increasing, and it's not acceptable, either as a third party receiving such devices to re-package, integrate & push upstream, or as a consumer spending their hard-earned cash on items which then go on to not work.

I review very many things, and many of them are let down by software; it's not acceptable. Get quality software engineers in to do your code; don't just pay the tea boy to bash up a script over a weekend!

And if you're not sure about your software offering, if you think it might need work, post it clearly and neatly on GitHub (or wherever) for the customer to take a look; don't obfuscate things, and don't hide behind great massive redwood-tall stacks of build tools. Because as much as I like Docker & Yocto & CMake & Make & Gradle, and all the others I've used right down to DOS batch files and Bash shell scripts, if they don't work for the customer, if they need the customer's machine setting up a certain way with a certain set of libraries, DO NOT blame the customer when they turn around and reject your product because your documentation is utterly lacking in depth or accuracy!

Sunday, 16 October 2016

Software Engineering : RegEx in C++ 11/14 with STL

I want to show you how the STL regular expressions in C++ work... Note: the complete source code & makefile are at the bottom.

First we'll need a makefile. I'm using Ubuntu Linux with GNU G++ v5.4.0; opening a terminal, I get a text editor up and we create this makefile:

CC=g++
STD=c++14
WARNINGS=-Wall -Wfatal-errors
FLAGS=-pedantic
OUTPUT=application
FILES=main.cpp

all:
	$(CC) -std=$(STD) $(WARNINGS) $(FLAGS) $(FILES) -o $(OUTPUT)

clean:
	rm $(OUTPUT)

And I save that as "makefile"... Next we need a simple "main.cpp" file to test this with, so go ahead and write:

#include <iostream>
#include <string>

int main ()
{
    std::cout << "Hello World" << std::endl;
}

Save that, and we can be in the folder and simply type "make". Everything should complete cleanly, and you can then type "./application" to run the resulting "Hello World" application.

Now let's go back into main.cpp and write a function... I'm not going to show you this as proper C++, and I'm not going to teach you about classes, so just go with me. Above the "int main()" we just created, we'll define a function to split a string into words whenever it finds whitespace...

#include <regex>
#include <vector>

std::vector<std::string> SplitString (const std::string& p_Source)
{
    std::vector<std::string> l_result;

    // The actual regular expression
    std::regex l_regularExpression ("(\\S+)");

    // Process the whole source string through the filter
    auto l_regularExpressionResult = std::sregex_iterator(
        p_Source.begin(),
        p_Source.end(),
        l_regularExpression);

    // Use the result iterator to get all the individual strings
    // into the result vector of strings
    for (auto i = l_regularExpressionResult;
         i != std::sregex_iterator();
         ++i)
    {
        auto l_item = (*i);
        std::string l_TheString = l_item.str();
        l_result.push_back(l_TheString);
    }

    // Return the result
    return l_result;
}

Let's just take a look at this working; go into your main and do this:

int main ()
{
    const std::string l_SourceString ("Mary Had a Little Lamb");
    std::vector<std::string> l_words = SplitString(l_SourceString);

    for (auto i = l_words.cbegin();
         i != l_words.cend();
         ++i)
    {
        std::cout << (*i) << std::endl;
    }
}

We can save, exit and build the program again; running it, we see this:

Mary
Had
a
Little
Lamb

So what did our new "SplitString" function do? Well, let's first of all hope you're comfortable with STL iterators, because we use one to go through the source string and then another to go through the expression results.

Our first important line of code is "std::regex l_regularExpression ("(\\S+)");", where we define the regular expression itself. No, I'm not going to teach you all the ins and outs of creating those strings; this particular expression simply captures each run of non-whitespace characters, i.e. each individual word.

The next important line is "auto l_regularExpressionResult = std::sregex_iterator(...)", where we use the sregex_iterator constructor to actually apply the expression we created on the previous line, applying it across the span of the whole source string, "begin()" to "end()", via the std::string::iterator.

We could use std::string::const_iterator too, by simply substituting "cbegin()" and "cend()".

The final parameter passes the actual filtering regular expression into place.

The result, and we don't need to worry about the exact type as we're leveraging auto, is a copy of the iterator. The STL implementation you have defines when the processing takes place: some versions process as you iterate over the sregex_iterator, making you process the input on the fly, whilst others pre-process everything, holding your code at that line (when you step through) until the complete source has been run through the regular expression. This can be a performance trap: people either think it will process when it does not, or it processes only as you iterate, and confusion ensues, especially when you are writing cross-platform code and the platforms exhibit different behaviours.
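
As an aside, if you ever want to force the whole input through the expression up front (or just count the matches), whichever behaviour your STL exhibits, std::distance (from <iterator>) walks the iterator to its end; a small sketch using the same variables as SplitString above:

// Walking from the first match to the end iterator forces every match
// to be produced, whatever evaluation strategy the implementation uses.
auto l_matchCount = std::distance(
    std::sregex_iterator(p_Source.begin(), p_Source.end(), l_regularExpression),
    std::sregex_iterator());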

The last important piece of code is actually going through the result to see if there is anything in the resulting iterator.

The awkward side of using auto shows up here, because on some platforms, when you iterate through the result to get each string, you might want to write "(*i).str()" rather than assigning the dereference (*i) to an auto first. However, some compilers (especially with -pedantic; GCC in this case) don't like that, so to make the code more maintainable, and to pre-empt it landing on any platform where the dereference of the iterator is reported to "not contain a definition for str()", I simply assign the dereference to an auto called "l_item" and then use "l_item.str()"... That's a lesson in maintainable code right there, folks.

That is a very basic introduction to regular expressions; you can see below why I have gone through all this.

Right now though, let's use a more complex regular expression and avoid the complexity of the iterator stuff; let's just validate a string as a UK postcode:

const bool ValidatePostcode (const std::string& p_UKPostcode)
{
    std::regex l_Validate ("^([A-PR-UWYZ0-9][A-HK-Y0-9][AEHMNPRTVXY0-9]?[ABEHMNPRVWXY0-9]? {1,2}[0-9][ABD-HJLN-UW-Z]{2}|GIR 0AA)$");

    return std::regex_match(p_UKPostcode.c_str(), l_Validate);
}

This might look a little cramped, but I never wanted to risk a mistake by splitting the reg-ex across lines. This isn't a perfect solution, by the way; I'm still writing a test routine to check it against a full list of UK postcodes online. I think it will let some stranger codes through as valid, but they are edge cases; this will work for 99.5% of addresses, and 100% of those I've tested so far.

There you go, good luck!


=== WHY DOES THIS EXIST ===
Today I've been using regular expressions in C++. Some might consider this dark magic; however, I assure you it is all above board. The problem was the validation of a UK postcode, via a quite terrible function:

bool Validate(char *Code);

This had all manner of hackery and trouble within; not least, it could not handle some London postcodes (we'll come back to postcodes later). I replaced all of its functionality with two lines of code... literally two. It went from around 500 lines of unmaintainable junk to the two lines of active code you see above. In fact, I could have placed the regular expression string into our master list of "strings", to yet further minimise where constants are defined, but I left that to him: a small victory, to coerce acceptance after I had rather drastically demonstrated his not thinking through the code changes needed, and his spending all week on something which took me two lines and about ten minutes of checking the regex was right!

Handing it back to the owner after my peer review, I think they wanted to cry; instead they rushed off to our common Director, bypassing all managerial-level code input from fellow programmers, and said I had "shown them up by using a third party library".

I had used the STL, something we use elsewhere, and I had also followed the coding standards which exist, so the function had become:

const bool ValidatePostcode(const std::string& p_UKPostcode) const;

This, I think you must agree, is more informative as to what it does: it tells us we can't edit the value; we're still passing everything by reference; we're no longer changing our system's string handling from "std::string" to "char*"; and the trailing const declares that the function changes nothing in the class it sits within.

All these rules are in the coding standard. Folks, before you go around a peer to complain (a more senior peer at that), please check you are in fact on the right track.

So, having validated my changing of the function prototype, I had to explain why I had used a third party library (as all such libraries need formal evaluation)... "Regular expressions are in the standard library", was my simple reply. "Only in the latest technical release!" was the mouth-frothing response from the hurt chap. "No, they've been in since C++11; we use the STL all over the code, it is formally evaluated and signed off by everyone, including yourself."

The guy looked extremely crestfallen, and whatever his motivations for having a go at me, I realised he just didn't know; he'd not read the books I had, he'd not used the code as I have, and he'd simply always used regular expressions from third party sources. That's fine, but please, folks, just check your coding standard and have at least a look on Google before you go shouting to those above in an unprofessional manner.


---- THE COMPLETE SOURCE (main.cpp) ----

#include <iostream>
#include <string>
#include <vector>
#include <regex>

std::vector<std::string> SplitString (const std::string& p_Source)
{
    std::vector<std::string> l_result;

    // The actual regular expression
    std::regex l_regularExpression ("(\\S+)");

    // Process the whole source string through the filter
    auto l_regularExpressionResult = std::sregex_iterator(
        p_Source.begin(),
        p_Source.end(),
        l_regularExpression);

    // Use the result iterator to get all the individual strings
    // into the result vector of strings
    for (auto i = l_regularExpressionResult;
         i != std::sregex_iterator();
         ++i)
    {
        auto l_item = (*i);
        std::string l_TheString = l_item.str();
        l_result.push_back(l_TheString);
    }

    // Return the result
    return l_result;
}

const bool ValidatePostcode (const std::string& p_UKPostcode)
{
    std::regex l_Validate ("^([A-PR-UWYZ0-9][A-HK-Y0-9][AEHMNPRTVXY0-9]?[ABEHMNPRVWXY0-9]? {1,2}[0-9][ABD-HJLN-UW-Z]{2}|GIR 0AA)$");

    return std::regex_match(p_UKPostcode.c_str(), l_Validate);
}

int main ()
{
    const std::string l_SourceString ("Mary Had a Little Lamb");
    std::vector<std::string> l_words = SplitString(l_SourceString);

    for (auto i = l_words.cbegin();
         i != l_words.cend();
         ++i)
    {
        std::cout << (*i) << std::endl;
    }

    // Postcodes
    std::cout << "--- Postcodes ---" << std::endl;
    std::cout << ValidatePostcode("NG16 5BP") << std::endl;
    std::cout << ValidatePostcode("NG10 1NQ") << std::endl;
    std::cout << ValidatePostcode("Robert") << std::endl;
    std::cout << ValidatePostcode("FP52 JTY") << std::endl;
}
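
For reference, when you build and run this, std::cout streams each bool as 1 or 0, so the output should look like this (the first two postcodes passing, the last two rejected):

Mary
Had
a
Little
Lamb
--- Postcodes ---
1
1
0
0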

---- makefile ----

CC=g++
STD=c++14
WARNINGS=-Wall -Wfatal-errors
FLAGS=-pedantic
OUTPUT=application
FILES=main.cpp

all:
	$(CC) -std=$(STD) $(WARNINGS) $(FLAGS) $(FILES) -o $(OUTPUT)

clean:
	rm $(OUTPUT)


P.S. Yes, this will all work with "STD=c++11" in the makefile!

Saturday, 15 October 2016

2016 "Scary Clowns": the Truth

The truth is they're all copying people who did this in 2014... See this from a story just up the road from where I live!



It's the balloons... They're creepy!

But more worrying: people actually called the police?...


Wednesday, 12 October 2016

Embedded GPU, EULA & DRM Issues

At work this week I've had an embedded Linux single-board computer come across my desk, and I've had to evaluate it. My brief appraisal came down to "great CPU & memory performance, crippled GPU performance".

Which was strange, as it was touted as having a pretty decent GPU, and indeed it does; I could use it to perform OpenCL processing, and I even got the new Boost.Compute stuff working on it in C++.

But it would not display anything in hardware; all the OpenGL ES rendering went through the software Mesa driver, and removing the Mesa driver got me a blank screen. I therefore set about understanding this issue, because the vendor of the board supplied the image I was using, and another of their demo images had all sorts of fancy graphics, spinning teapots galore!

So, what was the difference? A graphics driver! The locked-down image with all the demos appeared to have a hardware graphics driver, whilst the released end-user images had none. Annoyingly, the vendor also had no working build recipes for their system; they had recipes, they just would not build.

I therefore approached the GPU side directly, looking at other boards with the same chipset and support packages. I found one, and it became obvious why these end-user images had no driver: you had to sign up to an agreement as to how you were going to employ the GPU. If you were going to use the board in a product which could access or use DRM-controlled material, it was up to the person or entity accepting the GPU driver agreement to ensure everything was above board.

The vendor of the Linux board supplied no driver and signed nothing, so they would never be liable if your use of their product ripped someone else off. Clever of them.

However, it left me with the issue of "shall I accept this EULA on behalf of my employer?". After asking around, the general consensus of responses fell into two firm camps: either "I don't know" or "just accept it anyway".

This latter attitude is, well, frankly dangerous; if you don't know what you're signing up for, don't sign up for it!

Happily, I now know what the EULA is asking me to sign up for; I know the product is not going to be using any DRM-signed content, and therefore I can continue. But in my opinion it's always worth checking the small print!