New ZFS based NAS and VM Host – part 3

In part 1 of this series, I covered the requirements and hardware. In part 2 I covered the initial configuration of the new server. In this part I’ll cover setting the server up as a file server.

The main use case for this new server is to be the main file and media server for our home. To achieve this I needed NFS, SMB and AFP access to the imported datasets.

NFS access is built into ZFS (via the sharenfs dataset property), but SMB and AFP access require extra software to be installed. As indicated in part 2, you are strongly discouraged from installing software in the global zone. The supported approach is to create a new zone and install the software in there.
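As a sketch of the NFS side (the dataset name here is illustrative, not necessarily what I used), sharing a dataset is a single property change and needs no separate exports file:

```shell
# Publish a dataset over NFS straight from ZFS
zfs set sharenfs=on data/media
zfs get sharenfs data/media     # confirm the share is active
```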
Zones are sort-of virtual machines: each thinks it has exclusive use of the hardware, and each is a separate security container. Where SmartOS differs from VMware is its hybrid approach to supporting virtual machines. It does this by supporting multiple “brands” of zones:

  • “joyent” branded zones appear to be running SmartOS itself. They have no kernel of their own, they re-use the global zone’s kernel and just provide resource and security isolation.
  • “lx” branded zones appear to be running a specific version of Linux. As with “joyent” zones, they re-use the global zone’s kernel, but translate the brand’s system calls into those supported by SmartOS. This gets you the benefits of the software that normally runs on the brand, but without the overhead of running the brand’s kernel on top of the SmartOS kernel. The result is near bare-metal speed. Currently (Sept 2015), SmartOS supports Ubuntu, CentOS, Debian and Fedora (and maybe others).
  • “kvm” branded zones are more like any other KVM virtual machine, allowing just about any other operating system to be installed.

First attempt using an Ubuntu branded zone

This failed, so I’m not going into the detail.

You could install Ubuntu in a kvm branded zone, but using an Ubuntu lx zone avoids running two kernels. Base images for many Ubuntu variants exist in Joyent’s public repository, so I simply followed the instructions in the SmartOS wiki to:

  1. Import the base Ubuntu 14.04 LTS server image.
  2. Create a JSON file that describes the new zone.
  3. Create the new zone using the JSON file.

At the end of this, I had an Ubuntu 14.04 virtual machine called capella, on the same IP address as the old server and with direct access to the ZFS datasets containing the files from the old server.
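Those steps boil down to a couple of commands (the image UUID placeholder and the manifest filename capella.json are my choices, not fixed names):

```shell
imgadm avail | grep -i ubuntu   # find the base image and note its UUID
imgadm import <image-uuid>      # pull the image into the local store
vmadm create -f capella.json    # create the zone from the JSON manifest
vmadm list                      # confirm the new zone is running
```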

I then followed the guide at outcoldman.com to install Samba and the guide at … to install Netatalk.
At the end, I had a functioning Samba server, but I had trouble with Netatalk: my MacBook Air running Mavericks couldn’t connect to capella using AFP. Investigation showed that the 14.04 version of lx-ubuntu was missing the security module needed to support Mavericks.
Note: the SmartOS developers freely admit that branded zones are still under development.

Second attempt using a native SmartOS zone

Rather than spending too much time on this, I exploited one of the major advantages of using SmartOS: I simply deleted the zone, downloaded a base joyent brand image, created a new JSON file and created a new joyent branded zone. It took 5 minutes! I used the following JSON:

{
  "hostname": "",
  "alias": "capella",
  "brand": "joyent",
  "max_physical_memory": 4096,
  "image_uuid": "5c7d0d24-3475-11e5-8e67-27953a8b237e",
  "resolvers": ["", ""],
  "nics": [
    {
      "nic_tag": "admin",
      "ip": "",
      "netmask": "",
      "gateway": "",
      "primary": "1"
    }
  ],
  "filesystems": [
    { "type": "lofs", "source": "/data/media",    "target": "/import/media" },
    { "type": "lofs", "source": "/data/home",     "target": "/import/home" },
    { "type": "lofs", "source": "/data/home/git", "target": "/import/home/git" },
    { "type": "lofs", "source": "/data/public",   "target": "/import/public" },
    { "type": "lofs", "source": "/data/software", "target": "/import/software" }
  ]
}

I then installed Samba and Netatalk as before. This time all was well and I now had a functioning NFS, SMB and AFP file server.

I reconfigured the clients to access the new server and I was back where I was before I changed hardware. Simples!

Next step: install Plex Media Server, SABnzbd, CouchPotato and Sick Beard to create a fully functioning media server.

Migrating Amateur Radio to Linux – Day One

This is the second in a short series on my experiences moving my Amateur Radio station from Windows to Linux, and later to Mac OS/X. The first part can be found at Migrating Amateur Radio to Linux, Part One – Requirements

Day One – Installing Linux, CQRLog and FLDigi

I opted for Ubuntu as the distribution. I’ve used many of the others, including the recent Mint, but I know Debian based distributions and I quite like the new UI used by Ubuntu as it’s similar to OS/X.


My target PC is an old Lenovo ThinkPad T60 with Windows 7 installed on it. As the ThinkPad has no built-in serial ports, I use a Belkin USB serial adapter, type F5U103. This device is obsolete, but Ubuntu has a driver for it built in.

The serial port is connected to the RIGblaster Pro that interfaces the PC to the radios.

Audio is handled using a Plantronics headset USB adapter that I had lying about the shack. Again, Ubuntu has drivers for this.

Installing Ubuntu

I used GParted to shrink the Windows 7 partition to make space for Ubuntu. I created a 32GB root partition and a 2.5GB Swap partition and then installed the 32-bit version of Ubuntu 13.10 ‘Saucy Salamander’ as the T60 doesn’t have 64-bit support.

One advantage of installing this way is that I get full access to all the files on the Windows 7 partition as well.

Basic setup

After updating the operating system I installed my basic necessities:

The only program for which there isn’t a decent Linux client is Evernote. I use Evernote all the time and the absence of a decent Evernote client would be a big problem for me in my business life. It means I could never migrate 100% to Linux. That said, I can use the web client, and for my Ham activities, that’s fine.

Chromium and Dropbox were both installed from the Ubuntu Software Centre.

Installing FLDigi

FLDigi is one of the better known programs for the many digital modes that exist on the radio waves. It is actively developed and there is a well established support community. Installation was a breeze as it’s in the Ubuntu Software Centre.
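The Software Centre install is equivalent to the usual apt route (package name as found in the Ubuntu archive of the time):

```shell
sudo apt-get update
sudo apt-get install fldigi    # pulls in the Hamlib libraries as a dependency
```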

Installing FLDigi also causes Hamlib to be installed. Hamlib is a community project to develop an API that can be used to control radios over a variety of physical media and communications protocols. Both my radios support Yaesu’s CAT communications protocol via the RIGblaster Pro, and Hamlib has an ‘alpha’ driver for the IC-756ProII.

Configuring FLDigi was a bit of a test, as I needed to make sure I was using the correct tty device for the Belkin adapter and the correct audio devices for the Plantronics headset adapter. I opted to use the specific ‘/dev/serial/by-id’ name for the Belkin adapter, to avoid a Linux ‘feature’ that can map a USB device to a different tty device if it is re-plugged for any reason. I then told FLDigi to use Hamlib to control the radio using this device.
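The stable names under /dev/serial/by-id are just symlinks to whichever ttyUSB node the kernel happened to assign at plug-in time, which is easy to see by resolving one. A minimal sketch, using a throwaway symlink to stand in for the real device (the Belkin-style name is illustrative):

```shell
# Simulate the by-id indirection with a throwaway symlink
tmp=$(mktemp -d)
touch "$tmp/ttyUSB0"                                  # stands in for the kernel device node
ln -s "$tmp/ttyUSB0" "$tmp/usb-Belkin_F5U103-port0"   # stands in for the /dev/serial/by-id entry
readlink -f "$tmp/usb-Belkin_F5U103-port0"            # resolves to the current device node
rm -r "$tmp"
```

On a real system, `ls -l /dev/serial/by-id/` shows which tty each stable name currently points at.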

I set FLDigi to use the PulseAudio system and used the appropriate Settings applet to select the correct audio.

The one area where I did need to play about was getting the Rigblaster to send audio to the radio. Initially, I told FLDigi to put the radio in TX mode using CAT commands, via Hamlib. The radio went into Tx mode OK, but no audio was received by the radio. This was because the Rigblaster needed to be told to switch the audio path from the microphone to the audio in feed from the PC. This needed RTS to be asserted by the PC.

I needed to tell FLDigi not to send ‘PTT via Hamlib’ but to use ‘Hardware PTT’ with a ‘separate serial port PTT’ and ‘RTS’.

Once this was done, FLDigi could control the radio and I could send and receive data modes. Tick!

Installing CQRLog

I did some research into logging programs for Linux and CQRLog seemed to be the best for my needs. It’s a fairly basic logging program in concept, but it can use Hamlib to control a radio and it integrates with FLDigi.

CQRLog is also marked as being “in progress” on the HRDLog site, which implies that it will integrate there as well at some point in the future. Once again, CQRLog was in the Ubuntu Software Centre so installation was a breeze.

Configuration was also pretty easy once I told it where rigctld was installed (/usr/bin).

The only slightly tricky bit was getting my existing Log imported into CQRLog. This involved logging on to my account at HRDLog and then ‘restoring’ my log. This caused an ADIF file to be created, containing all my log entries. I then imported this into CQRLog.

Unfortunately, not all entries imported first time as CQRLog didn’t recognise some of the modes I have entries for (Domino, Thor and QPSK). I had to add these modes manually to CQRLog’s configuration preferences and then re-import the offending entries.

Once this was all done, I could simply tell CQRLog that it was operating the rig ‘remotely’ and it connected to FLDigi. Now, when I log a call in FLDigi, it gets stored in CQRLog.

After making a couple of Olivia contacts on 5MHz and 10MHz, I called it a day as we were going off to a New Year’s Eve party.

Using OpenIndiana as a file server

I’ve talked before about using ZFS as the basis for a file server that contains valuable information; like family photos and music collections. My first attempt at this was to use Ubuntu as the basis for the server and add the ubuntu-zfs package so that I could create and manage ZFS pools. The performance wasn’t great with this solution, so I’m taking another approach: using OpenIndiana to deliver a set of ZFS volumes to a separate installation of Debian (I’ve become disillusioned with Ubuntu of late. Basically, I don’t like the new UI).


  • The base hardware is the same as before, except that I am adding two 64GB solid state disks (SSDs) and a 16GB SSD
  • VMware vSphere (ESXi) is installed on the 16GB SSD
  • A new Host Bus Adapter (HBA), carrying the current two 3TB disks, is passed through ESXi to the OpenIndiana VM
  • OpenIndiana is installed into the first VM and configured to use the two 64GB SSDs as a ZFS mirror for its root pool
  • The existing ZFS pool is then imported into OpenIndiana and the appropriate volumes exported as NFS mounts
  • A new ZFS volume is created to hold further VMs and exported back to ESXi as an NFS mountpoint
  • Debian is then installed into a new VM and used to serve the various NFS resources to the house clients.
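The export steps in the last three bullets look something like this on the OpenIndiana side (pool and dataset names are illustrative):

```shell
# Share the imported data datasets over NFS
zfs set sharenfs=on rdata/media

# Create a dataset to hold further VMs and share it back to ESXi
zfs create rdata/vms
zfs set sharenfs=on rdata/vms
# ESXi then mounts rdata/vms as an NFS datastore; Debian mounts the rest
```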

This may sound a bit complex, but it gives some significant advantages:

  • ALL filesystems (apart from the very small ESXi filesystem) are created on redundant ZFS volumes.
  • Client operating systems like Debian (and Windows Server 2012) can benefit from the speed and integrity of ZFS without being aware of it.
  • There’s no need to add non-standard additions into Debian for it to take advantage of ZFS.


So far, I have been playing with OpenIndiana to get used to it. I’ve replaced the Ubuntu/ubuntu-zfs combination with OpenIndiana to gain speed and more functionality.

This is only a first step. Once I have the SSDs and a replacement for the existing HBA, I’ll install vSphere and build up.

HOWTO upgrade an Ubuntu zpool that has its root filesystem on ZFS

My attempts and final success at upgrading the disk space on my media server
Hard Disk

Why use ZFS

ZFS is, in my view, the best filesystem to use on a file server that is going to contain a large amount of data. This is a sweeping statement, but I think it is backed up by [1], [2] and [3].


For the purposes of this HOWTO, I’ll assume the reader knows about vdevs, zpools and zfs filesystems. If not, take a look at this tutorial on the Oracle web site.

The media server at Chez Howell comprises a 64-bit Ubuntu 12.04 (Precise Pangolin) Linux distribution with the added ubuntu-zfs package. To get this far I followed the HOWTO on GitHub.

The initial build had 2 x 1TB drives because that’s what I had available from my previous media server, a Fujitsu Scaleo Windows Home Server. However, that was never going to be enough, so I knew that I would need to upgrade the disks.

One of the beauties of ZFS is the ease with which you can expand a zpool (in theory anyway). In my case, I have two zpools. “rpool” is built from one vdev, a 160GB SATA drive. “rdata” is built from 2 x 1TB drives to provide a 1TB mirror. rpool has the root filesystem on it and rdata has all the media files.

In theory, to expand the size of rdata, all I needed to do was to shut down the system, replace one of the 1TB drives with a 3TB drive, power on and execute the command:

zpool replace rdata *old-drive-name* *replacement-drive-name*

This would break the mirror temporarily and then add the new drive. The system would then automatically “resilver” the mirror: i.e. copy all the data from the other old disk to the new disk and re-establish redundancy. I could then repeat the process for the other drive.

The magic occurs when I add the second drive: ZFS recognises that the zpool has two larger drives and automatically expands the contained ZFS filesystems to fill the new capacity.
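In command form, the theoretical upgrade is just (disk names illustrative; note that on newer ZFS releases the automatic growth also needs the autoexpand pool property to be set):

```shell
zpool replace rdata old-disk-1 new-disk-1   # swap the first drive; triggers a resilver
zpool status rdata                          # watch the resilver progress
zpool replace rdata old-disk-2 new-disk-2   # then repeat for the second drive
zpool set autoexpand=on rdata               # allow the pool to grow into the new capacity
```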

Note: in case you weren’t aware, you can have multiple filesystems on a zpool, each of which is limited in size only by the capacity of the pool. Unlike most filesystems, you don’t normally pre-define the size of a ZFS filesystem. You can set a quota on a filesystem if you need to, and it can be changed at any time. Neat, huh?
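For example, capping one filesystem while leaving the rest of the pool fluid is a one-liner either way (filesystem name illustrative):

```shell
zfs set quota=500G rdata/photos   # cap this filesystem at 500GB
zfs set quota=none rdata/photos   # ...or remove the cap again later
```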

Upgrade attempt one

Anyway, when I tried to do this, I hit a problem, caused by a lack of foresight on my part when I created the rdata zpool.

When you create a zpool, one of the parameters to the zpool create command is ashift. ashift sets the sector size the zpool uses when writing data, expressed as a power of two. It defaults to the sector size reported by the vdev (i.e. disk) and must be the same for all vdevs in the zpool.

When I created rdata, ashift defaulted to 0, which means “use the reported sector size of the vdevs”: i.e. 512 bytes (ashift=9), the physical and logical sector size of the Western Digital disks used. The new Seagate 3TB disks use a 4k physical sector size (i.e. ashift=12) and thus cannot be added to the rdata zpool.
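The ashift value is just the base-2 logarithm of the sector size, which a little shell arithmetic confirms:

```shell
echo $((1 << 9))    # ashift=9  -> 512-byte sectors
echo $((1 << 12))   # ashift=12 -> 4096-byte (4k) sectors
```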

After some digging, I found this good explanation on the IBM website.

Upgrade attempt two

Thus I needed to change the approach: rather than replacing the disks in the existing rdata zpool, I had to create a new zpool, copy the data across and then swap the zpools over. The relevant info and instructions are:

# Power the system down, remove one element of rdata and install a new disk
# Power on
zpool create -o ashift=12 datapool *replacement-disk-1*  # create a new temporary zpool called datapool
cd /
rsync -av /rdata/ /datapool/  # copy all data from rdata to datapool (the trailing slashes matter)
zpool export rdata            # export both zpools so the system forgets about them
zpool export datapool
zpool import datapool rdata   # re-import the newly created zpool under the old name

At this point, there is a new rdata zpool with no redundancy. All I now needed to do was poweroff, replace the other disk and recreate a mirror. Unfortunately I hit another problem.

When the system was rebooted, it came back with the old rdata, even though I had exported it and replaced it with the renamed datapool. I was stumped for ages until I asked a question over on AskUbuntu. Luckily the answer came back fairly quickly (thanks jonathon).

Prior to rebooting, I needed to execute the following to update Ubuntu’s initramfs:

update-initramfs -c -k all

After I did this, I was able to poweroff, replace the disk, poweron and execute:

zpool attach rdata *replacement-disk-1* *replacement-disk-2*

This attached the second new disk as a mirror of the first and automatically resilvered the zpool. Two hours later, I had a redundant 3TB media pool.