
Friday, 10 November 2017

Virgin Media : Poor Internet Speed Misery

You know that moment in Misery where Annie (played by the excellent Kathy Bates) raises the lump hammer to the ankles of Paul (James Caan)?  That hopeless moment, where you know what's coming, she's determined this is for the best, and he's helpless to change things....


Yeah, his feeling at that moment is the same feeling I get whenever I try to solve my service problems with Virgin Media.  I've tried the phone centre; they either won't talk to me, or deny I'm an account holder - the account being in my wife's name, but I'm a registered user of the account etc etc.... Or they simply deny there's an issue....

"I can ping you now sir"....

Really, a few ICMP packets get through and you think it's a-okay do you?

Or I get told, reboot your superhub...

Or variously asked "are you on wifi or wired"... It makes no difference when the speed recorded by either is less than 2mbits!!!

And I've just been told in a reply on twitter "If you have been told about an Area Issue were you given an estimate as to when the issue will be resolved"... I've not been told anything about any area issues, nothing, nada, zip.

Therefore I'm still not best pleased.  Remember, I downgraded from paying through the nose for Vivid200, as I never, ever got anywhere near 200mbits/sec.  I did record regular speeds of around 34 to 50 mbits, so the safer, more cost-effective option was to pay for Vivid50.  Simple, see, simple logical option: if they can't meet the expectations of their own service, play the system at its own game.

However, it seems that in reality, Vivid200 should be labelled Vivid50, and Vivid50 itself should be called Vivid1... Because breaking the 1mbit/sec barrier seems to be too much for it.

I have therefore decided to create something anyone can interpret, a chart... Managers love charts... People can interpret charts....

This chart is going to record the internet speed (recorded from my Linux server, wired over Cat5e directly to my 1 gigabit router, which is directly wired to the Virgin Media Superhub 2 set into Modem Mode).  No wifi, no confusion, no bull, and my router runs pfSense, so I can see there are no shenanigans, just the pure speed throughput.

I'm going to record the speed with "speedtest-cli"; you can see how to install it here.

I will collect my results by running a python script, which runs the speed test and outputs the time, the upload and finally the download speed into a CSV file.

Find the source on my github... 

import subprocess
import time
from time import gmtime, strftime

# Open a simple text file for appending each result
resultFile = open("speedtest.txt", "a+")

while True:
    # Header text & placeholders for our result
    print("Starting Test...")
    timeStr = strftime("%Y-%m-%d %H:%M:%S", gmtime())
    downloadSpeed = ""
    uploadSpeed = ""

    # Run the speed test process, capturing its output
    result = subprocess.run(['speedtest-cli'], stdout=subprocess.PIPE)

    # Decode the output into text & split the text
    # at each new line character
    btext = result.stdout
    text = btext.decode('ascii')
    lines = text.split("\n")

    # For each line, check whether it is upload
    # or download
    for line in lines:
        # For Download, split against space;
        # the middle value is the speed
        if line.startswith('Download: '):
            speedParts = line.split(" ")
            if len(speedParts) == 3:
                downloadSpeed = speedParts[1]
        # Likewise for Upload, the middle value is
        # the tested speed
        elif line.startswith('Upload: '):
            speedParts = line.split(" ")
            if len(speedParts) == 3:
                uploadSpeed = speedParts[1]

    # Print our output result as a CSV
    print(timeStr + "," + downloadSpeed + "," + uploadSpeed)

    # Write the result to a file also
    resultFile.write(timeStr + "," + downloadSpeed + "," + uploadSpeed + "\r\n")
    resultFile.flush()

    # Count down until the next test time
    count = 10
    while count > 0:
        # The line is repeated, so we use end=""
        # and a carriage return to print over and over
        print("\rTime until next test " + str(count) + " seconds", end="")
        time.sleep(1)
        count = count - 1
    # Print a new line to stop the next text appending
    # on the time count down line
    print()

I will then load this CSV file into a spreadsheet and create a chart, here's one I created earlier with 5 test data points.


The blue line is where I'm most concerned: that is my download speed.  As you can see, within three minutes I had quite a difference, ranging from a high of 2.24 mbit to a low of 1.23 mbit.  Upload speed has been more consistent, giving a measly 3.5 to 4.0 ish.
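As a quick sanity check alongside the spreadsheet, the same CSV can be summarised with a few lines of Python.  This is a minimal sketch assuming the layout the script produces (time, download, upload); the sample figures are made up for illustration:

```python
import csv
from io import StringIO

# Sample rows in the script's CSV output format: time, download, upload
# (made-up values, standing in for the real speedtest.txt)
sample = StringIO(
    "2017-11-10 09:00:00,2.24,3.81\n"
    "2017-11-10 09:01:00,1.64,3.52\n"
    "2017-11-10 09:02:00,1.23,4.01\n"
)

# Column 1 is the download speed in mbit/sec
downloads = [float(row[1]) for row in csv.reader(sample)]

print("min:", min(downloads))                             # min: 1.23
print("max:", max(downloads))                             # max: 2.24
print("avg:", round(sum(downloads) / len(downloads), 2))  # avg: 1.7
```

For the real data, swap the StringIO for open("speedtest.txt").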

I already know where VirginMedia will take the conversation, they will talk about "based on average peak time download performance".  However, I want to immediately counter that their speed information states that speeds are based around "Movie based on 4.1GB file size a single user and wired connection", and this chart is provided.... 

Average download speed at peak time (8pm to 10pm), the times I have mostly messaged them on twitter, is sub 1mbps... Right now, at just before 11am, they are still reporting as extremely low.  And yes, this server is the ONLY machine on in the house: the wifi is off, the other wire into this hub removed, there is one wire to one machine and one wire to their Superhub...

And yes, I can get 1gbit disk to disk over NFS on this hub, the wires to and from it to the machines are perfect, and I've also swapped the wire to the superhub.

I'm going to run this for a few days, and see what speeds we get in the dead of night, or early mornings, and see if there is a pattern.  I have known for years Virgin will throttle speeds, however, their table of speeds is labelled "Average", one can only believe we're on the lowest ebb of that bell curve, and I am not a happy customer.

Monday, 19 June 2017

Bash : Power of Pipes

Subtitle: "Get IP Address Easily"

When I say easily, I mean not so easily, but with the proper tools... Let me explain, it's been one of those days... I've a remote server running some flavour of Linux, and no-one knows its remote IP Address; they all SSH into the box, run "ifconfig" and note down the value, they then plug this into a config (or worse still were baking it directly into some code) and run their services....

The trouble of course being, years later, they're no longer the programmers nor maintainers of this machine, I am...

And to be frank whenever the IP address changes I don't want to recompile their java code, nor use vi to edit the various configuration files, I want a script to at least update the settings automatically.

I therefore changed their code to load the IP address, not hard code it, and used some other scripts to put the IP address into the config file at boot...

The first line of that script is what I'm going to document here... so it starts:

#! /bin/bash
ifconfig | grep inet | tr ' ' '\n' | sed -u '/^$/d' | head -2 | tail -1 > ipaddress.txt

This script gives me a single line of text with the IP Address in it, for the one and only adapter in the machine.  If you have multiple adapters you'd have to play about with the grep inet to select the row you want, with a head & tail call, before moving to the final location, or whatever...
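To see what each stage contributes without poking a live box, the same pipeline can be run over a captured sample of the old net-tools style ifconfig output (the addresses below are invented):

```shell
# Sample of old-style ifconfig output - invented addresses, two adapters
sample='eth0      Link encap:Ethernet
          inet addr:192.168.0.42  Bcast:192.168.0.255  Mask:255.255.255.0
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0'

# grep keeps the inet lines, tr puts each token on its own line,
# sed drops the blanks, and head/tail pick out the second token:
# the first adapter's address (with the "addr:" guff still attached)
echo "$sample" | grep inet | tr ' ' '\n' | sed '/^$/d' | head -2 | tail -1
# prints: addr:192.168.0.42
```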

I wrote this up however, and immediately started to use the IP address.

The net result was a request to explain all this functionality to a colleague... Here's what I came up with.

ifconfig gets us the adapter information...
grep strips off lines we don't want only giving lines for the inet adapter
translate turns spaces into new lines
the sed call removes the blank lines, giving just the IP address and some guff
the first adapter IP address is therefore always the second line of this output
we select the first two lines with head
then select only the latter of these two with tail
and write this to a file

Her reply... "What are the Lines?"....

"What lines?"...

"These | things"....

"They're pipes, I'm piping the information from one program to the next..."

"Oh"

"Do you know what pipes do in Unix and Linux?"

"No"

I sent her to this video... https://youtu.be/XvDZLjaCJuw?t=5m15s

Wednesday, 10 May 2017

Development : Python, MySQL and Protocol Buffers Build

Today I've come to a totally virgin installation upon a server, this was for a work group I've got to head up who are looking at pushing MySQL with Python.  And things initially went wrong...

I stipulated they had to use Python3 and thought everything else would be fine for them to install with Pip3, so...

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install python3
sudo apt-get install python3-pip
sudo apt-get install mysql-server

Everything looked fine, their user could install packages through Pip3, and they had Flask and a few other things and were flying.  I used the mysql client on the command line to test I could add a few items, and also ran...

sudo mysql_secure_installation

To do the basic hardening of the database, so everything was fine... Right?....RIGHT?!?!

No, not exactly.... "I can't get mysql.connector".... Came the cry.

And they were right, it reported a myriad of build issues and could not install.  I took a look... NIGHTMARE!

It appears the latest version of mysql.connector, installed via Pip3, depends upon Protocol Buffers from Google... Which the Pip install didn't sort out, at least not easily... Luckily I run a whole gaggle of virtualized machines, so I could quickly spool up a new instance and try a few things out...

This is the script I came up with....

cd ~
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install -y python3-pip git autoconf automake libtool curl make g++ unzip
sudo pip3 install flask
hostname -f > hostname.txt
sudo apt-get install -y mysql-server
sudo mysql_secure_installation
sudo ldconfig
cd ~
git clone http://github.com/google/protobuf.git
cd protobuf/
./autogen.sh
./configure
make
make check
sudo make install
sudo ldconfig
cd python
sudo python3 ./setup.py install
cd ~
sudo ldconfig
sudo pip3 install mysql-connector --install-option='--with-protobuf-include-dir=/usr/local/include/google/protobuf' --install-option='--with-protobuf-lib-dir=/usr/local/lib' --install-option='--with-protoc=protoc'

Let's go through this step by step...

cd ~
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install -y python3-pip git autoconf automake libtool curl make g++ unzip

This is just the basics; our installation will now depend upon git, autoconf, automake, libtool, curl, make, g++ and unzip.  Most standard distributions will contain packages for these, and we use -y just to skip the confirmation prompts.

sudo pip3 install flask
hostname -f > hostname.txt

The next step is simply a couple of items for our project, we're going to use Flask to provide a restful interface, and the "hostname.txt" is simply to remove our need to call "hostname" again later.

sudo apt-get install -y mysql-server
sudo mysql_secure_installation

Our next step is to install and secure the MySQL service.

sudo ldconfig
cd ~

Generic code now, to simply reload the library list and change to the home directory.

git clone http://github.com/google/protobuf.git
cd protobuf/
./autogen.sh
./configure
make
make check
sudo make install

This is the build of protocol buffers from google, so we pull it from their github repo, we move into that folder, prepare and configure the build, then make the whole build.  By far this is the LOOOONGEST instruction, on a single core 1GB equipped virtual instance this took around 45 minutes.

Once complete we simply need to reload the libraries again...

sudo ldconfig

However, protocol buffers are still not installed within Python, so we are still in the "~/protobuf" folder, we now need to go deeper, into the python folder and perform the setup installation under python...

cd python
sudo python3 ./setup.py install

When complete we again need to reload the libraries...

sudo ldconfig

And the final, secret sauce, is to actually install mysql connector through pip3 with protocol buffers...

sudo pip3 install mysql-connector --install-option='--with-protobuf-include-dir=/usr/local/include/google/protobuf' --install-option='--with-protobuf-lib-dir=/usr/local/lib' --install-option='--with-protoc=protoc'

This is a single command, spanning one single line.

And voila, once complete you get to use mysql.connector in your python code...

import mysql.connector
import gc

con = mysql.connector.connect (user='whatever', password='something', host='localhost', database='yeahyeah')

con.close()
con = None

gc.collect()

You can find out more about why I nullify and garbage collect a connection in my previous post.

Thursday, 27 April 2017

Server Admin : Ubuntu 17.04 thinks it's Ubuntu 12.04???

Yeah, I'm serious, I've taken time tonight to look at the release of Ubuntu Server 17.04, specifically to set up a new mini-server which is to be Core 2 Duo powered and on 24/7 as boot strapper & service strapping server itself.

But, before I run I like to walk, so I set up a 2 core, 1GB RAM VMware machine from the 17.04 ISO... Take a look at the first thing it has presented to me....


Yes, just read that again... I booted the server... and the only action I took was to log in... Welcome to 17.04... All good...

Wait, what?... Why am I being warned to upgrade my 12.04?  This is 17.04!

Before I ran around like my last vestiges of hair were on fire, I decided to do a simple test.  I've previously found that Ubuntu often goes wandering off on the internet for message of the day (motd) information, so I pulled the (virtual) network card out of the machine.

This results in a long boot time, but you at least know no remote files or services are going to be listing things on your screen...


Five minutes later, I get to see what the system says... From experience I think Canonical have shown news before, up to 40 lines of 80 characters; I've never seen that much, but it has been a while since I looked at their motd scripts.

After logging in, I still got the message, however, a fresh install (without any network) didn't show the message, so I believe the install we see here cached something from an online source.

Anyway, taking a look in the /etc/update-motd.d folder, you can see a series of numbered scripts, these are so numbered to allow Canonical, or yourself, to add message of the day scripts, and keep them in the order you see them.


Checking "00-header" we see just the usual log in.

Then "10-help-text" is the three lines about documents, management and support.  I actually add "#" to the start of each of those lines to remove that file's actions; I don't delete the file though, just in case.

The next file, "50-motd-news", looks to be the culprit... I'm not even going to look inside it, because I can see the next file in the folder is "90-updates-available", and I can see in the login that the updates available happen after the message I want to be rid of....

So this strange, confusing, message is in "50-motd-news", I'm going to cut to the chase and kill that file.
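For anyone less keen on outright deleting a file Canonical may simply reinstate on upgrade, there's a gentler option: drop the execute bit.  The scripts in /etc/update-motd.d are executed by run-parts, which skips anything non-executable.  Sketched here on a throwaway copy rather than the real file:

```shell
# Rehearse on a copy; the real target would be /etc/update-motd.d/50-motd-news
mkdir -p /tmp/motd-demo
printf '#!/bin/sh\necho some news\n' > /tmp/motd-demo/50-motd-news
chmod +x /tmp/motd-demo/50-motd-news

# Drop the execute bit - run-parts will now skip this script
chmod -x /tmp/motd-demo/50-motd-news
ls -l /tmp/motd-demo/50-motd-news
```

On the real system that's just "sudo chmod -x /etc/update-motd.d/50-motd-news", and it's trivially reversible.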

And now my login is much neater, I have added a call to "ifconfig" into the 10-help-text, but my login is now clean of this strange message.  But I'm not impressed this has gone on, and I'm going to have to take a look through all these other motd scripts to see what and where my server is going off to.... Hmmm.

Wednesday, 5 April 2017

Development : Anti-Hungarian Notation

Whilst cutting code I employ a coding style, which I enforce, whereby I denote the scope of each variable with a prefix.

"l_" for Local
"m_" for Member
"c_" for constant
"e_" for enum

And so forth, for static, parameter and a couple of others.  I also allow compounds of these, so a static constant would be:

"sc_"

This is useful in many languages, and imperative in those which are not type strict, such as Python.
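A tiny Python sketch of the convention, for illustration; note the "p_" prefix for parameters is my assumption here, since only local, member, constant and enum are spelt out above:

```python
class Connection:
    # c_ : constant, fixed for every instance
    c_default_port = 8080

    def __init__(self, p_host):  # p_ : parameter (assumed prefix)
        # m_ : member, lives with the instance
        self.m_host = p_host
        self.m_port = Connection.c_default_port

    def address(self):
        # l_ : local, exists only inside this call
        l_address = self.m_host + ":" + str(self.m_port)
        return l_address

print(Connection("db.example.local").address())  # db.example.local:8080
```

At a glance during review you can see address() touches only members and locals, which is exactly the point of the scheme.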

Some confuse this with "Hungarian Notation"; it's not.  Hungarian notation is the practice of prefixing the variable name with its type, for example "an integer called count" might be "iCount".

I have several problems with anyone using Hungarian Notation, and argue against it thus: with modern code completion and IDE lookup tools it is really not needed; with useful and meaningful naming of your variables the type is not needed; and finally there are multiple types with the same possible meaning... i.e. "bool", "BYTE" and "std::bitset" - are they all 'b'?  What about signing notation, do you compound "unsigned long" as "ul" onto the name?

It all gets rather messy, a good name is enough.

However, the scope of a variable might change, the scope might not be enforced, and in non-strict languages you might have a variable go out of scope and then be automatically re-created with a blank value, if you don't follow your scopes.

Therefore I can justify my usage and enforcement of this coding standard.

What I can't stand however is when someone listens to my explaining this, they read my coding standards document, they even go as far as having me reject their code during peer review for these reasons, and then they dismiss my comment with the "it's just Hungarian Notation"... Scope is not type, and type does not define scope, don't be fooled!

Friday, 31 March 2017

Linux Server Admin : Bash Kill Processes By Common Name

On my Linux server I've recently wanted to go through and kill a bunch of application instances in one go.  This is a server where students have been connecting and running various programs under python, therefore I want to remove from my processes anything called "python".

We can see these in our bash shell with the command:

sudo ps aux | grep python

To remove all these programs I create the following bash shell script:

k=0
for i in $(ps aux | grep '[p]ython' | awk '{print $2}')
do
  k=`expr $k + 1`
  kill -9 $i
done
logger -s "Closed $k Python Instances"

Notice k=`exp... this is NOT a single quote (apostrophe), it is the backtick (grave accent); on a UK English keyboard this is the key to the left of the number 1.  It is used to substitute the output of a command into place, so the value counted in k becomes the result of the expression "$k + 1", i.e. k+1.  The grep pattern '[p]ython' is a little trick to stop grep matching its own process, and awk picks out the PID column so we kill process IDs rather than every field of the ps output.  More about Command Substitution in Bash here.
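A standalone couple of lines show the substitution at work:

```shell
k=0
# Each backticked expr runs, and its output is substituted in place
k=`expr $k + 1`
k=`expr $k + 1`
echo "k is now $k"   # prints: k is now 2
```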

The call to logger -s places the message both on screen and in syslog for me to review later.

This simply loops through all the matching processes and kills them off.  I've saved this as a "sh" file, added executable rights with "sudo chmod +x ./killpythons.sh", and I created this to run as a cron job every day at 3am (a pretty safe time, unless I have some students burning the candle at both ends).
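For completeness, the crontab entry for that 3am run looks something like this (the path is whatever yours is; mine here is hypothetical):

```
0 3 * * * /home/me/killpythons.sh
```

Added via crontab -e, for whichever user has rights to kill the students' processes.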

That's everything about the bash script, for those of you wondering about the students, they're those folks following my learning examples from my book, which you can buy here.


Thursday, 2 February 2017

Development : Phone Link

I wander around all day with an Android phone in my pocket, however, I can't always answer it... "No Problem" I can hear you cry, "leave it to go to answer phone".

However I purposefully have no answer phone, if you can't get a hold of me, you can't get a hold of me, but what I would like is a method of receiving a message which I can digest in my own time, not a voice mail, not a text message, I want to be able to have left my phone at home and still get the information from it that I've had a call....

I'm a developer... I can do this right?!!??!

Well, yes and no.  First of all I started with a Java application which would read the missed calls log and forward it to one of my servers.  This worked, but didn't let the person at the other end know I had been made aware of the missed call; it also logged junk callers and just gave me the number.  If I didn't recognise the number I could be left lost.

So, I set about a bit of middleware: I had the missed calls log forward to my server each time it changed, and the server then looked up each number on a white list and a black list.  If found on the white list, it looked for them in contacts and sent them an automated message that I had received a missed call and to e-mail me...

Fine for contacts.

Last week however I went a step further: if a number is not black listed, it looks them up as a contact, then if not found it googles for them and scrapes the top ten results, e-mailing the number and these search results to me.

I've today added a series of regular expressions to pattern match any number or name from the results, and if more than two match, it flags the caller as a contact and googles for them, with finding an e-mail address as the target... It will go through my various e-mail addresses and aliases, looking for previous missives, and will send them a mail that way.

The person has to have had contact with me, this is not blindly spamming people, and I have had to use a POP3 connection to one of my own email providers as Virgin Media stop SendMail from just working... But, so far it's worked.

Today alone I've had nearly eighteen calls, and it's sent about five messages; since last night (when I started it) it's sent just over twenty in total....

I dub this "Phone Link" for Android, and I may very well push it out to the world if I can polish it up a little and not have it so closely bound to my various e-mail services.  What do you all think?

Monday, 28 September 2015

SDL2.0 Complete Installation on Ubuntu

I've just spent some time setting up SDL for development on my Ubuntu machine, and found that the Ubuntu package repositories lack SDL v2.0; they have the older 1.2, but that wasn't good enough.  So here is a little script I've put together to get me all the dependencies (at least what I think the dependencies are), download each part of SDL (SDL, TTF, Mixer & Image), extract them into folders, build and install them, and then delete the folder copy which was extracted.

cd ~
sudo apt-get install build-essential xorg-dev libudev-dev libts-dev libgl1-mesa-dev libglu1-mesa-dev libasound2-dev libpulse-dev libopenal-dev libogg-dev libvorbis-dev libaudiofile-dev libpng12-dev libfreetype6-dev libusb-dev libdbus-1-dev zlib1g-dev libdirectfb-dev
wget https://www.libsdl.org/release/SDL2-2.0.3.tar.gz
tar -xvzf SDL2-2.0.3.tar.gz
cd SDL2*
./configure
make
sudo make install
cd ~
rm -rf SDL2-2*
wget https://www.libsdl.org/projects/SDL_image/release/SDL2_image-2.0.0.tar.gz
tar -xvzf SDL2_image-2.0.0.tar.gz
cd SDL2_image*
./configure
make
sudo make install
cd ~
rm -rf SDL2_image*
wget https://www.libsdl.org/projects/SDL_ttf/release/SDL2_ttf-2.0.12.tar.gz
tar -xvzf SDL2_ttf-2.0.12.tar.gz
cd SDL2_ttf*
./configure
make all
sudo make install
cd ~
rm -rf SDL2_ttf*
wget https://www.libsdl.org/projects/SDL_mixer/release/SDL2_mixer-2.0.0.tar.gz
tar -xvzf SDL2_mixer-2.0.0.tar.gz
cd SDL2_mixer*
./configure
make all
sudo make install
cd ~
rm -rf SDL2_mixer*
sudo ldconfig

Once this is all done, you need to edit /etc/ld.so.conf adding to it, thus:

sudo nano /etc/ld.so.conf

And add:

/usr/local/lib

(entries in ld.so.conf are directory paths; the "include" keyword is only for pulling in other conf files)

Save the file and re-run sudo ldconfig so the new path is picked up.  I can now create an empty C++ project, or just a main.cpp, and include the following files:

#include <SDL2/SDL.h>
#include <SDL2/SDL_image.h>
#include <SDL2/SDL_ttf.h>
#include <SDL2/SDL_mixer.h>

The libraries to link against are:

libSDL2.a
libSDL2_image.a
libSDL2main.a
libSDL2_mixer.a
libSDL2_test.a
libSDL2_ttf.a

(Each .a also has a shared .so alongside, if you prefer)

The corresponding linker options on the command line are:

g++ -std=c++11 main.cpp -lSDL2main -lSDL2 -lSDL2_image -lSDL2_ttf -lSDL2_mixer