New ZFS based NAS and VM Host – part 2

In part one I covered the motivation, requirements and hardware. In this post I will cover software installation and configuration.

Installing and configuring SmartOS

SmartOS differs from many other operating systems (FreeNAS being a notable exception) in not requiring installation. You simply copy a disk image to a USB thumb drive and boot from it. SmartOS then creates a RAM disk, copies itself into it and runs from there. Alternatively, you can boot across the network using PXE.

This reflects SmartOS’s primary use case as a data-centre operating system: because nothing is installed, an upgrade is deployed simply by copying a new image to the flash drive and rebooting.

This does have an important side effect, however. SmartOS supports the notion of “zones”, first implemented in Solaris. When booted, SmartOS itself runs in the “global” zone, but because its filesystem is on the RAM disk, any changes you make there do not persist across a reboot. There are ways around this so that, for example, your SSH public key ends up in root’s authorized_keys and you can log in without a password; but you are strongly discouraged from installing software in the global zone. More on that later.
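
As an example of one such workaround: the USB key is mounted read-write at /usbkey, and settings in /usbkey/config do survive reboots. As I understand it from the SmartOS wiki, pointing the root_authorized_keys_file setting at a key file under /usbkey/config.inc is enough to have your key put in place on every boot (the /tmp/mykey.pub path below is just a placeholder for wherever your public key happens to be):

mkdir -p /usbkey/config.inc
cp /tmp/mykey.pub /usbkey/config.inc/root.authorized_keys
echo "root_authorized_keys_file=root.authorized_keys" >> /usbkey/config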

Installation

I started by following the instructions in the SmartOS wiki to download the latest SmartOS image and copy it to a 2GB consumer-grade USB thumb drive. It’s only 161MB, so it didn’t take long.
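
The copy itself is just a raw image write. On a Linux box it looks something like the following, where the image filename and /dev/sdX are placeholders for whatever your download and lsblk actually report (double-check the device name before running dd):

bunzip2 smartos-latest-USB.img.bz2
sudo dd if=smartos-latest-USB.img of=/dev/sdX bs=4M conv=fsync
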
I booted the server from the thumb drive and, because this was a clean system, I was presented with a wizard that asked for the hostname, IP address (or DHCP) and the identities of the drives I wished to use for the “zones” pool, which is used to store all the datasets for the other zones.

Rather than risk getting it all wrong, I chose the first two HGST drives and put them in a mirror. After configuring the zpool, I was presented with the login prompt.

The default login is root/root, so I immediately changed the root password!

zpool status showed:

# zpool status
  pool: zones
 state: ONLINE
  scan: resilvered 1.98G in 0h0m with 0 errors on Fri Sep 11 13:22:01 2015
config:

        NAME        STATE     READ WRITE CKSUM
        zones       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0

After this, I added the other two drives as a second mirrored vdev using

zpool add zones mirror c1t2d0 c1t3d0

and then added the SSD as a separate log (slog) device

zpool add zones log c1t4d0

At the end of this, zpool status showed:

# zpool status
  pool: zones
 state: ONLINE
  scan: resilvered 4.01G in 0h0m with 0 errors on Fri Sep 11 13:38:15 2015
config:

        NAME       STATE      READ WRITE CKSUM
        zones      ONLINE        0     0     0
          mirror-0 ONLINE        0     0     0
            c1t0d0 ONLINE        0     0     0
            c1t1d0 ONLINE        0     0     0
          mirror-2 ONLINE        0     0     0
            c1t2d0 ONLINE        0     0     0
            c1t3d0 ONLINE        0     0     0
        logs
          c1t4d0   ONLINE        0     0     0

errors: No known data errors

This whole process took about 30 minutes (including downloading and copying the SmartOS image).

Moving data from the old server

Now that I had the new server installed and ready to go, I needed to copy the data across from the old server. There are a number of ways to do this:

  1. Physically move the disks across.
  2. Use zfs send/zfs receive to copy the data across the network (sketched below).
  3. Use rsync to send the files.
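
For completeness, option 2 would have looked roughly like the lines below, run from the old server. The pool and dataset names match those used later in this post, and “newserver” is a placeholder for the new box, assuming root SSH access to it:

zfs snapshot -r rdata@export
zfs send -R rdata/media@export | ssh newserver zfs recv zones/import/media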

As the old server was still running, I didn’t want to move the disks, but experimentation showed that copying the data across my network was going to take days. So I compromised.

I split the mirror on the old server and moved one disk to the new server. I then imported it:

zpool import rdata

It showed up as a degraded mirror. To get the data across, I did the following:

# zfs create -o mountpoint=/import zones/import
# zfs snapshot -r rdata@export
# for dset in media public software home; do zfs send -R rdata/${dset}@export | zfs recv zones/import/${dset}; done
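
One quick way to check that everything arrived is to list the received datasets and their sizes, for example:

zfs list -r -o name,used,mountpoint zones/import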

After checking the data was there, I cleaned up:

zpool export rdata
poweroff

I’ll keep this as an archive disk and re-use the one in the old server.

Next Steps

The initial use for this server is as a file/print server and as a media server. In the next post, I’ll cover how I did this.

Author: Gareth Howell

IT professional, radio amateur, scout leader and beer drinker. I enjoy helping small businesses use their limited IT budgets to best effect.
