Creating a 24TB Storage Server for $5,000

Background

One of the recent tasks given to me was to create a “JBOD” server for various uses.  It had to come in around $5k, provide at least 20TB usable, and be flexible yet fast.  So I started doing some research into the subject.  What I came up with was that there was no way we could purchase a pre-built solution of that size at the price point we wanted.  The only remaining option was to build it myself using commodity hardware.

Parts

I had originally spec’d out this server with somewhat nicer hardware, but due to corporate policy I had to buy through CDW….yay.  So some items are not ideal, such as the motherboard and power supply; I would have preferred a Gigabyte board and a Corsair PSU.  I will not list prices, as I’m sure by the time anybody reads this they won’t be relevant.

ASUS M4A785-M motherboard
AMD Phenom II X4 3.0GHz quad-core processor
Crucial 4GB DDR2 kit (later I added an extra pair of 1GB sticks I had lying around)
Antec 1000W Quattro power supply
NORCO RPC-4020 4U rackmount server case
Areca ARC-1280ML PCIe SATA II RAID controller card
Seagate Barracuda 7,200RPM 1.5TB 32MB-cache SATA HDD (20 of them)
A few Molex “Y” cables and extenders
Any NICs you would like to add
1GB USB key (for the OpenFiler install)

Operating System

The first decision was what operating system to run on this server.  After weighing my options it came down to FreeNAS, OpenFiler, OpenSolaris with ZFS, or plain Linux.  Since there were time constraints on this project I did not have the time to really learn OpenSolaris.  I also thought that a full-blown Linux distribution would be a good choice, but it might not provide the performance I needed, and a storage-optimized, specialized build seemed the better route.  I ended up choosing OpenFiler since the community at large seemed to feel it performed better and had much better support for iSCSI, which is something I wanted to test.  I also wanted to maximize space on my RAID array, so I decided to run the OpenFiler OS from a separate disk.  The original plan was to use a spare 2.5″ HDD I had lying around; however, the drive turned out to be no good, so I opted to use a 2GB USB stick I had.  The OS does not have any read/write-intensive tasks once it is loaded, so USB 2.0 speeds should not be an issue.  The only time this may become a problem is if a lot of swapping is happening, and at that point there are other problems that need to be addressed first.
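
If you want to confirm that swapping really isn’t an issue on a USB-booted install, a couple of stock Linux commands are enough (nothing OpenFiler-specific here):

free -m     # the "Swap: used" figure should stay near zero
vmstat 5    # watch the si/so columns; sustained non-zero swap-in/swap-out means the box is swapping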

Installation and Configuration

The first step was to assemble the chassis.  The NORCO case came pretty much ready to go, and I am actually quite impressed with it.  It came with all the drive trays, the backplane for the drives, all the case fans, and more than enough hardware for any installation.  The drives all came in OEM packaging and were secured into the hot-swap trays.  The motherboard, memory, CPU, and power supply were all installed in the case.  The next thing to go in was the Areca controller.  It requires an x8 PCI-E slot, so unfortunately it took up my x16 slot.  Thankfully many motherboards come with on-board video.  (Make sure the one you choose does; there is no sense eating up a slot for a video card.)

The next item was to run the Molex power cables.  This case powers the drives off the backplane, so you need to plug in two Molex plugs per backplane “strip.”  If you have large hands, stop right now and enlist the help of somebody small-handed, as this is a really tight space.  You may have to use some Y-cables and/or extenders depending on your power supply and how many Molex ends it has.  (Note: try to balance the load across the power rails on your supply; a unit with multiple rails seems to work better here than one with a single large rail.)

Next are the SATA cables.  These should be supplied with the Areca controller.  While this is nowhere near as bad as the Molex cables, it still isn’t fun.  The controller I used had multi-lane SATA to 4-port SATA breakout cables, as well as a connector that would normally drive little power/activity LEDs.  This extra connector isn’t necessary with the NORCO case, as the backplane handles the blinky lights itself.

At this point I would test out the system and make sure it at least POSTs and detects all the hardware correctly.  Next I had to create the RAID arrays.  The initial design called for three RAID-5 arrays consisting of 7, 7, and 6 disks.  (Note: with this controller you can choose to do the initialization immediately or in the background.  Doing it up front took well over a weekend due to the size of the arrays.)
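
For reference, the rough usable-capacity math behind that layout (decimal terabytes; the OS will report somewhat less in TiB):

20 drives x 1.5TB = 30TB raw
3 RAID-5 groups = 3 drives of parity = 4.5TB
30TB - 4.5TB = 25.5TB usable (a bit over 23 TiB before filesystem overhead)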

After that I inserted the USB stick and the OpenFiler install CD.  Boot to the CD and follow the prompts to install the OS, making sure to install it to the USB stick and not to any of the arrays.  From there follow the OpenFiler documentation, if you paid for it, or use the various forums and other resources to configure it.  It is fairly straightforward and pretty easy to use.

Performance Results

After everything was configured and set up, I decided to run some simple tests to determine whether the array was performing well and things worked as they should.  Since OpenFiler runs a variant of Linux (rPath), I decided a simple dd read/write test would at least give me some ballpark numbers.

I ran two tests from the local command line:

dd if=/dev/zero of=file bs=1024k count=20000
dd if=file of=/dev/null bs=1024k count=20000

This gave me a sequential write and a sequential read of around 20GB each, which made sure I was not just working in cache.
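
If you want to be more certain the page cache isn’t inflating the numbers, two common tweaks are to force direct I/O and to drop the cache between the write and the read.  Both depend on the dd and kernel versions shipped with the distribution (drop_caches needs a 2.6.16+ kernel), so treat this as a sketch:

dd if=/dev/zero of=file bs=1024k count=20000 oflag=direct   # bypass the page cache on the write
sync; echo 3 > /proc/sys/vm/drop_caches                     # flush anything cached before reading back
dd if=file of=/dev/null bs=1024k count=20000 iflag=direct   # bypass the page cache on the read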

The tests gave me read speeds anywhere from 300-500 MB/s and write speeds from 200-400 MB/s; yes, that’s megabytes.  I was quite impressed, especially since the test only ran against a single RAID group, which is at most 7 disks.

The JBOD has been running as primary storage for my ESX lab.  It is hosting 12 ESX servers running around 80-90 VMs over iSCSI.  It is now starting to show poor performance numbers while the disks are being utilized fairly heavily.
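
A quick way to check whether the disks themselves are the busy part, as opposed to the network, is to watch per-device utilization.  iostat comes from the sysstat package, which may or may not be present on the filer, so treat this as a sketch of the idea:

iostat -x 5    # %util near 100 and a high await on the array device means the disks are the bottleneck
vmstat 5       # a consistently high wa (I/O wait) column also points at storage rather than the network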

Unfortunately, since this is a network-attached disk box, the major limitation has been the gigabit ports out the back.
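
That limit is easy to see on paper: a single gigabit link tops out around 125 MB/s raw, call it 110-115 MB/s after protocol overhead, which is well below the 300-500 MB/s the array manages locally.  One way to watch how close a link is to that ceiling without installing anything extra (the interface name is an assumption; adjust eth0 to whatever your box uses):

# print eth0's byte counters every 5 seconds; the delta divided by 5 gives bytes/s on the wire
while true; do grep eth0 /proc/net/dev; sleep 5; done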

Lessons Learned

If I had to do this project over with what I know now, I would have done a few things differently.

1. Not settled on hardware, and pushed harder to get what I originally spec’d out.

This would have made my life much easier, as I found out that this ASUS motherboard didn’t have enough PCI-E slots for the quad-port gigabit NICs I wanted, plus the iSCSI and FC HBAs.

2. I would have gone with OpenSolaris and ZFS or some other high-performing filesystem.

OpenFiler works quite well, but it has had some odd quirks, most notably with NFS dropping connections to ESX and not allowing them back in.  When that happens, a couple of quick checks on the filer side are sketched below.
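
These are generic Linux NFS-server checks rather than anything OpenFiler-specific, so whether the service names match your install is an assumption:

exportfs -v                         # confirm the ESX hosts' networks are still listed in the exports
rpcinfo -p | grep -E 'nfs|mountd'   # confirm nfsd and mountd are still registered with the portmapper
service nfs restart                 # heavy-handed, but often lets dropped clients reconnect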

3. I would not have used iSCSI for the primary ESX storage.

In all of our testing, NFS datastores outperformed iSCSI (VMFS) datastores.

4. 10Gb Ethernet.

1Gb pipes just weren’t fast enough for what we were trying to do.
