Whilst investigating a problem with the Socket 775 to 771 Xeon hack machine I found it would no longer boot, then would no longer POST, and finally would no longer even accept power (indicated by the 3V rail LED)... This was a disaster; I've been sorting out the network at home for the last few weekends, and yesterday morning was meant to be a delve-in-and-fix session (at 8:30am on a Sunday, this is a feat).
Unfortunately it immediately escalated: the motherboard showed distinct signs of corrosion, which is really strange as it's been in a dry, airy room. What looked (and tasted) like salt had condensed on the board... I do wonder if this board had a life near the coast in former days (it was second hand), and the salt just slowly precipitated out of the fibreglass?
Whatever the reason, there was salt all over the board. I cleaned it all off with isopropyl alcohol, to no avail; it would not POST.
So I stripped it out and went to my junk pile, two other motherboards were already dead, the third... Well I know it works, after a fashion, it's an EVGA nForce 680i SLI board, my previous main workstation board actually... But I retired it for my Core i7, and it had been off with a friend of mine, it has at least one faulty RAM slot too...
Inserting my test Celeron D it came on, and I could run a memory test until it overheated and thermal shutdown occurred... So, I pulled out the Xeon from the hack board and got it into the nForce... Nothing... Dead... But, a BIOS patch later (with a floppy disk!) and everything was working...
So the Xeon went into the EVGA nForce 680i no problem! 4GB of RAM installed in the two working slots, and with new thermal paste I left it soak testing... Everything seems fine...
And this is equivalent to (if not better than) the previous board, because I know its history, and it's got six SATA headers and dual gigabit LAN... It's actually the perfect little server board, except for the dead memory slots.
A new one of these boards is still something like £50, so that was out of the question. I did order a replacement from eBay, a Gigabyte-branded one which can take up to 16GB of RAM but only has a single LAN connection; it will have to do.
Until then though, the server is getting re-installed on the EVGA nForce 680i, and I'm going to keep my eyes on eBay for another of these boards to replace the already dead set from my junk pile.
On the topic of drives, I wanted to set up a series of physical mirrors with ZFS, however, I don't have matching drives, so I'm wondering what's the best order to set up the disks...
I feel a little confused as to the best way...
Tuesday, 15 November 2016
Administrator: ZFS Mirror Setup & NFS Share
I'm going to explain how to use some simple (VMware emulated) hardware to set up a ZFS mirror. I'm picking a mirror so there's a 100% duplicate of the data.
I've set up the VM with a 4 core processor and 4GB of RAM, because the potential host for this test setup is a Core 2 Quad (2.4GHz) with 4GB of DDR2 RAM, and it's perfectly able to run this system quite quickly.
The first storage disk I've added is a single 20GB drive, this is the drive we install Ubuntu Server 16.04 onto.
Then I've returned to add three new virtual disks, each of 20GB. These are where our data will reside. Let's boot into the system and install ZFS... Our username is "zfs-admin", and we just need to update the system:
sudo apt-get update
sudo apt-get install zfsutils-linux
Once complete, we can check the status of any pools, and should see nothing... "No pools available"
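That check is just the status command:
sudo zpool status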
We can now check which disks we have as hardware in the system (I already know my system is installed on /dev/sda).
sudo lshw -C disk
I can see "/dev/sdb", "/dev/sdc" and "/dev/sdd", and I can confirm these are my 20GB disks (showing as 21GB in the screen shot).
The file they have needs about 5GB of space, so our 20GB drives are overkill, but they've just had a data failure, as a consequence they're paranoid, so they now want to mirror their data to make sure they have solid copies of everything rather than waiting on a daily backup...
sudo zpool create -f Tank /dev/sdb
This creates the storage pool on the first disk... And we can see this mounted into the Linux system already!
sudo zpool status
df -h
Next we add our mirror disk, so we have a copy of the pool across two disks... Not as fast as raidz, but I'm going with it because if I say "RAID" there are going to be "RAID-5", "RAID-6" kind of queries, and I'm not going to jump through hoops for these guys, unless they pay me of course (hint hint)!
That's "sudo zpool attach -f Tank /dev/sdb /dev/sdc", which is going to mirror the one disk to the other... As the disks are empty this re-striping of the data is nearly instant, so you don't have to worry about time...
Checking the status and the disks now...
We can see that the pool has not changed size, it's still only 20GB, but we can see /dev/sdb and /dev/sdc are mirrored in the zpool status!
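The relevant part of the status output looks roughly like this (your device names will match whatever lshw reported):
  pool: Tank
 state: ONLINE
config:
        NAME        STATE     READ WRITE CKSUM
        Tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0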
Finally I add their third disk to the mirror, so the pool is a three-way mirror; they can detach one disk and take it home tonight, leaving two at work... It's a messy solution, but I'm aiming to give them peace of mind.
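That's the same attach again, this time pointing at the third disk lshw showed us:
sudo zpool attach -f Tank /dev/sdb /dev/sdd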
To detach a drive from the pool, they can do this:
sudo zpool detach Tank /dev/sdc
And take that disk out and home, in the morning they can add it again and see all the current data get put onto the drive.
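Re-adding it is just another attach, and ZFS will resilver the current data onto it:
sudo zpool attach -f Tank /dev/sdb /dev/sdc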
So, physical stuff aside, they now need NFS to share the "/Tank" mount over the network...
sudo apt-get update
sudo apt-get install nfs-common nfs-kernel-server
sudo nano /etc/exports
And we add the line:
/Tank 150.0.8.0/24(rw,no_root_squash,async)
Where the address is the network your clients are on (note there is no space before the bracketed options, or they apply to everyone); at home for me this would be 192.168.0.0/24.
Then you restart NFS with "sudo systemctl restart nfs-kernel-server" (or the older "sudo /etc/init.d/nfs-kernel-server restart"), or reboot the machine...
From a remote machine you can now check and use the mount:
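Something along these lines, assuming the server sits at 150.0.8.1 (substitute your server's real address) and the client has nfs-common installed:
showmount -e 150.0.8.1
sudo mount -t nfs 150.0.8.1:/Tank /mnt
df -h /mnt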
Why does this exist?
I think I just won a bet. About 10 years ago I helped a friend of mine (hello Marcus) set up a series of cron scripts to dump a set of folders as a tar.gz file from his main development server to a mounted share on a desktop-class machine in his office.
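If you've not done that sort of thing before, it's just a nightly crontab entry along these lines (the paths and schedule here are invented for illustration; note cron needs the % escaped):
0 2 * * * tar -czf /mnt/backup/dev-dump-$(date +\%F).tar.gz /srv/projects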
He has just called me in a little bit of a flap, because that development server has gone down, their support had lapsed for it and he can't seem to get any hardware in to replace the machine for a fair while.
All his developers are sat with their hands on their hips asking for disk space, and he says "we have no suitable hardware for this"...
He of course does: the backup machine running the cron jobs is a (for the time) fairly decent Core 2 Quad Q6600 (2.4GHz) with 4GB of RAM. It's running Ubuntu Server (16.04, as he's kept things up to date!)...
Anyway, he has a stack of old 80GB drives on his desk, he doesn't 100% trust them, but the file they have is only going to expand to around 63GB... So he can expand it onto one of them, the problem is he wants to mirror it actively...
Convincing him this Core 2 Quad can do the job is hard, so with him on the phone I ask him to get three of these 80GB drives, they're already wiped, and go to the server... Open the case, and let me ssh into it.
I get connected, and the above post is the result, though I asked him to install just one drive (which came up as /dev/sdg) and then I set that up as the zpool, then I asked him to physically power off and insert the next disk, where I then connected again and added it as a mirror.
In the end he has five actual disks, of dubious quality, mirroring this data; he's able to expand the tar.gz backup onto the pool and it's all visible to his developers again.
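The restore itself is nothing clever, just untarring onto the pool's mount point; something like this, with the archive path being whatever his cron job last produced:
sudo tar -xzf /path/to/latest-dump.tar.gz -C /Tank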
This took about 15 minutes... It in fact took longer to write this blog post as I created the VM to show you all!