Building a Custom 15TB RAID Storage Server

05 Dec 2012 - Austin, TX

I recently built a custom 15TB RAID storage server to store all of my media files, and now that it has been running reliably for a few months, I figured I'd cover how I built it.

The Hardware

Before this, I stored all of my data across about 5 external hard drives that I'd acquired and filled over the years, and I really wanted everything in one place. I'd decided on using ZFS, so I started there. Thanks to a post I saw on Hacker News I knew some basic requirements, and I also looked into FreeNAS. After a fair amount of research, I settled on a parts list built around five 3TB drives on an HBA card, with an SSD for the operating system.

The Case

I've always wanted to build a custom computer case, and while this isn't quite a full case, this project gave me the perfect opportunity.

I started off by looking for places that would do custom CNC work and would hopefully let me avoid full-blown CAD software (I'd already attempted a design that way and failed). I found eMachineShop, which had a Windows app to help design what you wanted. I booted up my VM, and after about a month of designing, printing, and test fitting the pieces, I finally placed my order. The only downside was the price: it ended up costing a little over $500 for two sets of all the pieces I needed. After about two weeks of waiting I received a UPS tracking number, and another week later I had the parts in my hands.

At the same time, I ordered all of the standoffs and screws I needed for the case from DigiKey. The standoffs have worked great; the only problem I've had is that a few snapped at the top when I had to ship the server (I have yet to order replacements), but the structure is still pretty strong.

The Software

I'd originally planned to use FreeNAS to power everything, but when I tried to install it, a bug related to my HBA card kept it from recognizing my five 3TB drives. After a bunch of attempts with previous versions suggested by other people using the card, I decided to switch to Ubuntu. This ended up working out perfectly for me, since I'm much more familiar with Debian-based systems than I am with FreeBSD ones.

I started off with a server install of Ubuntu 12.04 LTS on the SSD. It instantly recognized my storage drives, and I started working on setting up ZFS. The ZFS PPA was really easy to get installed, and afterwards I set up the ZFS pool. The following was all I needed to get started with one large storage tank in a RAID-Z configuration:

$ sudo zpool create tank raidz /dev/disk/by-id/ata-ST3000DM001-serial ...
$ sudo zfs create tank/store
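
Checking on the pool afterwards is just the stock ZFS tooling - nothing here is specific to my build, but roughly:

$ sudo zpool status tank   # shows the raidz vdev and the state of each member disk
$ sudo zpool list tank     # quick view of raw pool size and how much is allocated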

The RAID-Z setup left me with about 10.72TB of usable space:

$ sudo zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
tank        3.47T  7.25T  46.3K  /tank
tank/store  3.47T  7.25T  3.47T  /tank/store
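
For reference, getting ZFS on Linux installed in the first place was just a PPA and a single package. The PPA and package names below are what I remember the zfs-native project using around that time, so treat this as a rough sketch:

$ sudo apt-get install python-software-properties   # provides add-apt-repository on 12.04
$ sudo add-apt-repository ppa:zfs-native/stable      # assumed PPA name
$ sudo apt-get update
$ sudo apt-get install ubuntu-zfs                    # assumed package name; builds the module via DKMS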

Sharing

To get sharing started, all I needed to do was install Netatalk and Avahi, and this article helped me configure everything. In my /etc/netatalk/afpd.conf config file I've got:

- -tcp -noddp -uamlist uams_guest.so,uams_dhx.so,uams_dhx2_passwd.so -nosavepassword -nozeroconf -advertise_ssh

And then to actually share it, I added the following to my /etc/netatalk/AppleVolumes.default:

:DEFAULT: options:upriv,usedots
/tank/store Dashr veto:/temp/ options:ro
/tank/store Dash allow:alex options:usedots,upriv

Which gave me two shares - one for my own use, and a read-only share without authentication for anyone else, which hides a temp folder I keep some disorganized files in.
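
Both daemons came straight from the Ubuntu repositories; the install and the restarts after editing the configs looked roughly like this (package and service names assumed for 12.04):

$ sudo apt-get install netatalk avahi-daemon
$ sudo service netatalk restart       # pick up the afpd.conf and AppleVolumes changes
$ sudo service avahi-daemon restart   # refresh the AFP service advertisement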

Remote Access

Recently I decided I wanted to be able to access the server remotely (I've got nginx running in front of some other apps I run), but I was in a bind: my apartment complex has its own router sitting in front of the router for my unit, so I can't simply forward a port. Then I realized that I could open a reverse SSH tunnel from my server to an EC2 micro instance I've got running. To set that up, I created a cron script on my server that runs every five minutes, mainly to reconnect if something crashes (credit to this article):

#!/bin/sh
# Re-create the reverse tunnel: expose this box's port 80 as port 8080 on the EC2 host
COMMAND="ssh -N -f -R 8080:localhost:80 ec2-user@"
# Only launch a new tunnel if the exact command isn't already running
pgrep -f -x "$COMMAND" > /dev/null 2>&1 || $COMMAND
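
The crontab entry itself is just the standard every-five-minutes pattern; the script path below is hypothetical, so point it at wherever the script actually lives:

# crontab -e on the storage server; /home/alex/bin/tunnel.sh is a placeholder path
*/5 * * * * /home/alex/bin/tunnel.sh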

And then on my EC2 server, in my nginx configuration, I added the following:

upstream dash {
    server 127.0.0.1:8080;
}

proxy_cache_path /var/www/cache  levels=1:2    keys_zone=STATIC:10m
                                 inactive=24h  max_size=1g;
server {
    server_name remote.example.com;
    location / {
        auth_basic "Restricted";
        auth_basic_user_file dash.htpasswd;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://dash;
    }
    location ~* ^.+\.(jpg|jpeg|gif|png|css|js)$ {
        auth_basic "Restricted";
        auth_basic_user_file dash.htpasswd;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://dash;
        proxy_cache            STATIC;
        proxy_cache_valid      200  1d;
        proxy_cache_use_stale  error timeout invalid_header updating
                               http_500 http_502 http_503 http_504;
    }
}

I also added some HTTP caching to speed up the static assets, so they don't have to be pulled over the tunnel on every request. While not the speediest of options, it has worked reliably for me.
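
The dash.htpasswd file referenced in both location blocks is a standard htpasswd file; creating it with the htpasswd tool from apache2-utils looks something like this (the path and username below are placeholders):

$ sudo apt-get install apache2-utils              # provides the htpasswd utility
$ sudo htpasswd -c /etc/nginx/dash.htpasswd alex  # -c creates the file, then prompts for a password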

Conclusion

This was a really fun project to work on; I've wanted to build a custom case for a long time, and even though I spent hours and hours designing the case and quadruple-checking that everything fit, I'm really pumped about the final result. Here are a few more pictures of my server today:

