Blog Archives

New ZFS based NAS and VM Host – part 3

In part 1 of this series, I covered the requirements and hardware. In part 2 I covered the initial configuration of the new server. In this part I’ll cover setting the server up as a file server.

The main use case for this new server is to be the main file and media server for our home. To achieve this I needed NFS, SMB and AFP access to the imported datasets.

NFS access is available by default in ZFS, but SMB and AFP access require additional software to be installed. As indicated in part 2, you are strongly discouraged from installing software in the global zone. The supported approach is to create a new zone and install the software in there.
Zones are sort-of virtual machines: each thinks it has exclusive use of the hardware, and each is a separate security container. Where SmartOS differs from VMware is in its hybrid approach to supporting virtual machines. It does this by supporting multiple “brands” of zones:

  • “joyent” branded zones appear to be running SmartOS itself. They have no kernel of their own, they re-use the global zone’s kernel and just provide resource and security isolation.
  • “lx” branded zones appear to be running a specific version of Linux. As with “joyent” zones, they re-use the global zone’s kernel, but translate the brand’s system calls into those supported by SmartOS. This gets you the benefits of the software that normally runs on the brand, but without the overhead of having to run the brand’s kernel on top of the SmartOS kernel. The result is near bare-metal speeds. Currently (Sept 2015), SmartOS supports Ubuntu, CentOS, Debian and Fedora (maybe others).
  • “kvm” branded zones are more like any other KVM virtual machine, allowing just about any other operating system to be installed.

First attempt using an Ubuntu branded zone

This failed, so I’m not going into too much detail.

You could install Ubuntu in a kvm branded zone, but using the Ubuntu variant of an lx zone avoids running two kernels. Base images for many Ubuntu variants exist in Joyent’s public repository, so I simply followed the instructions in the SmartOS wiki to:

  1. import the base Ubuntu 14.04 LTS server image
  2. Create a json file that describes the new zone
  3. Create the new zone using the json file.
    At the end of this, I had an Ubuntu 14.04 virtual machine called capella, on the same IP address as the old server and with direct access to the ZFS datasets containing the files from the old server.

I now followed the guide at outcoldman.com to install Samba and the guide at … to install Netatalk.
At the end, I had a functioning Samba server, but I had trouble with Netatalk. My Macbook Air running Mavericks couldn’t connect to Capella using AFP. Investigation showed that the 14.04 version of lx-ubuntu was missing the security module needed to support Mavericks.
Note: the SmartOS developers freely admit that branded zones are still under development.

Second attempt using a native SmartOS zone

Rather than spend too much time on this, I exploited one of the major advantages of using SmartOS. I simply deleted the zone, imported a base “joyent” brand image, created a new JSON file and created a new joyent branded zone. It took 5 minutes! I used the following JSON:

{
  "hostname": "",
  "alias": "capella",
  "brand": "joyent",
  "max_physical_memory": 4096,
  "image_uuid": "5c7d0d24-3475-11e5-8e67-27953a8b237e",
  "resolvers": ["", ""],
  "nics": [
    {
      "nic_tag": "admin",
      "ip": "",
      "netmask": "",
      "gateway": "",
      "primary": "1"
    }
  ],
  "filesystems": [
    { "type": "lofs", "source": "/data/media",    "target": "/import/media" },
    { "type": "lofs", "source": "/data/home",     "target": "/import/home" },
    { "type": "lofs", "source": "/data/home/git", "target": "/import/home/git" },
    { "type": "lofs", "source": "/data/public",   "target": "/import/public" },
    { "type": "lofs", "source": "/data/software", "target": "/import/software" }
  ]
}
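The manifest is then handed to vmadm in the global zone. A stray comma is the most common failure, so a quick syntax check before creating the zone saves a round trip. The sketch below uses a cut-down, purely illustrative manifest; vmadm itself only exists on SmartOS, so that step is shown commented out.

```shell
# Write a cut-down example manifest (illustrative content only)
cat > /tmp/capella.json <<'EOF'
{
  "alias": "capella",
  "brand": "joyent",
  "max_physical_memory": 4096
}
EOF

# Validate the JSON syntax with Python's stdlib (runs anywhere)
python3 -m json.tool /tmp/capella.json > /dev/null && echo "JSON OK"

# On the SmartOS global zone you would then create the zone with:
#   vmadm create -f /tmp/capella.json
```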

I then installed Samba and Netatalk as before. This time all was well and I now had a functioning NFS, SMB and AFP file server.

I reconfigured the clients to access the new server and I was back where I was before I changed hardware. Simples!

Next step: install Plex media server, SABnzbd, CouchPotato and Sickbeard to create a fully functioning media server.

Back to Windows 7

Despite being something of an expert in Linux, I’ve gone back to Windows for my amateur radio PC. Linux was simply too much work to get going.

Partly this is down to me having far less time to mess about with the PC rather than actually operating. The main reason, however, is that I know Ham Radio Deluxe intimately and I can’t get within 50% of what it does with the available Linux programs: at least not without putting up with a dozen different styles of UI.

It’s a great shame. There’s lots of ham radio software out there, but unless you are prepared to treat the computer as an experimental platform, stick with Windows. For me, the computer is a tool that enhances what I can do with the radio, but the radio is the focus.

Migrating Amateur Radio to Linux – Day One

This is the second in a short series on my experiences moving my Amateur Radio station from Windows to Linux, and later to Mac OS/X. The first part can be found at Migrating Amateur Radio to Linux, Part One – Requirements

Day One – Installing Linux, CQRLog and FLDigi

I opted for Ubuntu as the distribution. I’ve used many of the others, including the recent Mint, but I know Debian based distributions and I quite like the new UI used by Ubuntu as it’s similar to OS/X.


My target PC is an old Lenovo Thinkpad T60 that has Windows 7 installed on it. As the Thinkpad has no built in serial ports, I use a Belkin USB Serial Adapter, type F5U103. This is obsolete but Ubuntu has drivers built in for this device.

The serial port is connected to the RIGblaster Pro that interfaces the PC to the radios.

Audio is handled using a Plantronics headset USB adapter that I had laying about the shack. Again, Ubuntu has drivers for this.

Installing Ubuntu

I used GParted to shrink the Windows 7 partition to make space for Ubuntu. I created a 32GB root partition and a 2.5GB Swap partition and then installed the 32-bit version of Ubuntu 13.10 ‘Saucy Salamander’ as the T60 doesn’t have 64-bit support.

One advantage of installing this way is that I get full access to all the files on the Windows 7 partition as well.

Basic setup

After updating the operating system I installed my basic necessities:

The only program for which there isn’t a decent Linux client is Evernote. I use Evernote all the time and the absence of a decent Evernote client would be a big problem for me in my business life. It means I could never migrate 100% to Linux. That said, I can use the web client, and for my Ham activities, that’s fine.

Chromium and Dropbox were both installed from the Ubuntu Software Centre.

Installing FLDigi

FLDigi is one of the better known programs for the many digital modes that exist on the radio waves. It is actively developed and there is a well established support community. Installation was a breeze as it’s in the Ubuntu Software Centre.

Installing FLDigi also causes HamLib to be installed. HamLib is a community project to develop an API that can be used to control radios over a variety of physical media and communications protocols. Both my radios support Yaesu’s CAT communications protocol via the Rigblaster Pro and HamLib has an ‘alpha’ driver for the IC-756ProII.

Configuring FLDigi was a bit of a test as I needed to make sure I was using the correct tty device for the Belkin adapter and the correct audio devices for the Plantronics headset adapter. In my case, I opted to use the specific ‘/dev/serial/by-id’ device for the Belkin adapter to avoid a Linux ‘feature’ that can cause a USB device to be mapped to a different tty device if it is re-plugged for any reason. I then told FLDigi to use Hamlib to control the radio using this device.
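On the real machine, the stable name is found by listing `/dev/serial/by-id/` and resolving the symlink. The sketch below simulates that layout with a plain symlink in a temporary directory, so the device and link names are purely illustrative:

```shell
# Simulated /dev/serial/by-id layout; device and link names are made up.
mkdir -p /tmp/serial-demo
cd /tmp/serial-demo
touch ttyUSB0                                 # stands in for the kernel device node
ln -sf ttyUSB0 usb-Belkin_F5U103-if00-port0   # stands in for the stable by-id symlink
readlink -f usb-Belkin_F5U103-if00-port0      # resolves to the current ttyUSB node
```

Because the by-id symlink is regenerated by udev every time the adapter is plugged in, it always points at whichever ttyUSBn the kernel assigned, which is exactly the property that makes it safe to put in FLDigi’s configuration.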

I set FLDigi to use the PulseAudio system and used the appropriate Settings applet to select the correct audio.

The one area where I did need to play about was getting the Rigblaster to send audio to the radio. Initially, I told FLDigi to put the radio in TX mode using CAT commands, via Hamlib. The radio went into Tx mode OK, but no audio was received by the radio. This was because the Rigblaster needed to be told to switch the audio path from the microphone to the audio in feed from the PC. This needed RTS to be asserted by the PC.

I needed to tell FLDigi not to use ‘PTT via Hamlib’, but instead ‘Hardware PTT’ with a ‘separate serial port PTT’, asserting ‘RTS’.

Once this was done, FLDigi could control the radio and I could send and receive data modes. Tick!

Installing CQRLog

I did some research into Logging programs for Linux and CQRLog seemed to be the best for my needs. It’s a fairly basic logging program in concept, but it can use HAMLib to control a radio and integrates with FLDigi.

CQRLog is also marked as being “in progress” on the HRDLog site, which implies that it will integrate there as well at some point in the future. Once again, CQRLog was in the Ubuntu Software Centre so installation was a breeze.

Configuration was also pretty easy once I told it where rigctld was installed (/usr/bin).

The only slightly tricky bit was getting my existing Log imported into CQRLog. This involved logging on to my account at HRDLog and then ‘restoring’ my log. This caused an ADIF file to be created, containing all my log entries. I then imported this into CQRLog.

Unfortunately, not all entries imported first time as CQRLog didn’t recognise some of the modes I have entries for (Domino, Thor and QPSK). I had to add these modes manually to CQRLog’s configuration preferences and then re-import the offending entries.
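ADIF stores each QSO as a flat run of length-tagged fields, so a mode the importer doesn’t recognise is easy to spot in the raw file. A hypothetical record (callsign and details invented) using one of the offending modes looks like:

```text
<CALL:5>M0XYZ <QSO_DATE:8>20131231 <TIME_ON:4>1830 <BAND:3>40m <MODE:4>THOR <EOR>
```

Each tag is `<FIELD:length>` followed by exactly that many characters of data, and `<EOR>` ends the record, which is what makes ADIF practical as a flat interchange format between logging programs.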

Once this was all done, I could simply tell CQRLog that it was operating the rig ‘remotely’ and it connected to FLDigi. Now, when I log a call in FLDigi, it gets stored in CQRLog.

After making a couple of Olivia contacts on 5MHz and 10MHz, I called it a day as we were going off to a New Year’s Eve party.

Migrating Amateur Radio to Linux, Part One – Requirements

I’ve been a user, developer and system administrator for Linux based systems, and prior to that, Unix based systems, for years. I think my first contact was with a Unix Version 6 system at my second employer, ITT IDEC back in 1980. It was actually Interactive’s IS/1. Since that time, I’ve used just about every variant of Bell Labs’ most famous operating system. Oddly enough though, I haven’t tried to use it for my Amateur Radio hobby before.

This is the start of a short series on how I’m moving from a purely Windows based operating environment to one that is equally usable on Linux and Mac OS/X. I’m targeting OS/X as well because my own personal laptop is a beautiful, but aging, Macbook Air that goes with me everywhere. As I am likely to start doing a lot of business travelling very shortly, I want to be able to take my radios with me, and that means running my digital modes software on the Mac as well as on Linux. (I don’t use my Macbook in the shack as the shack PC is dedicated to amateur radio operating).


So that I can monitor my progress, I’m starting by setting some objectives.

High Level

  • The main objective is to have the same capabilities on Linux and OS/X as I currently have on Windows.
  • The second, less obvious objective is to be able to operate across both, or even all three, platforms and keep each platform in sync.
  • Given the growing penetration of mobile devices, I’m adding a third objective of being able to operate on Android and iOS devices with the same fidelity of information.

The need to maintain fidelity more or less imposes the need to integrate with some form of Cloud based services. I already use Dropbox to keep files in sync across my various Windows, Mac and Linux laptops and with my iPhone and iPad, so that’s an obvious one to use to keep some resources in sync. The other main integration point would be my Station Log.

All licensed amateur radio stations are required to keep a comprehensive log of all contacts made and all stations worked. If I could find a common logging program across all platforms that used a flat file for storage, then I could use Dropbox to sync this. However, there are a number of Cloud based logging platforms now and as I already use HRDLog on Windows and iOS, it makes sense to see if I can use this on Linux and OS/X.


The first priority is to migrate from Windows to Linux, so my first objective is:

To create a Linux operating environment that provides the same capabilities as my existing Windows environment, in a form that is portable (at a functional level at least) to OS/X and which keeps operating data synchronised across multiple devices

Required capabilities

My current Windows environment provides the following capabilities:

  • Local logging, using Ham Radio Deluxe version 5
  • Multiple digital modes using the same
  • multiple rig control (Icom IC-756ProII and Yaesu FT-817) integrated with the above, using HRD
  • DXCluster access with spotting and customisable filtering, using HRD
  • Integration of the local log with HRDLog, eQSL and LOTW
  • Propagation monitoring, using Afreet’s Ionoprobe
  • Accurate time synchronisation, using Dimension 4
  • WSPR and JT65A protocol support
  • Echolink support

As you can see, the main requirement is for a replacement for HRD. HRD is an amazing piece of software, but it ceased to be freeware a while ago and while I have no problem paying for good software, it has stimulated this re-appraisal.

Anybody who knows Linux, knows that the Unix approach is to construct small, single purpose tools and then use the operating system capabilities to string them together to form tool chains. This contrasts with the Windows and Mac approach of constructing full function software packages. I have seen the merits of both approaches in the appropriate circumstances, so I’m not going to argue that one approach is better; but I am assuming I will need to adopt the mix and match approach with Linux.

More to follow…

HOWTO upgrade an Ubuntu zpool that has its root filesystem on ZFS

My attempts and final success at upgrading the disk space on my media server
Hard Disk

Why use ZFS

ZFS is, in my view, the best filesystem to use on a file server that is going to contain a large amount of data. This is a sweeping statement, but I think it is backed up by 1, 2 and 3.


For the purposes of this HOWTO, I’ll assume the reader knows about vdevs, zpools and zfs filesystems. If not, take a look at this tutorial on the Oracle web site.

The media server at Chez Howell comprises a 64-bit Ubuntu 12.04 (Precise Pangolin) Linux distribution with the added ubuntu-zfs package. To get this far I followed the HOWTO on github.

The initial build had 2 x 1TB drives because that’s what I had available from my previous media server, a Fujitsu Scaleo Windows Home Server. However, that was never going to be enough, so I knew that I would need to upgrade the disks.

One of the beauties of ZFS is the ease with which you can expand a zpool (in theory anyway). In my case, I have two zpools. “rpool” is built from one vdev, a 160GB SATA drive. “rdata” is built from 2 x 1TB drives to provide a 1TB mirror. rpool has the root filesystem on it and rdata has all the media files.

In theory, to expand the size of rdata, all I needed to do was to shut down the system, replace one of the 1TB drives with a 3TB drive, power on and execute the command:

zpool replace rdata *old-drive-name* *replacement-drive-name*

This would break the mirror temporarily and then add the new drive. The system would then automatically “resilver” the mirror: i.e. copy all the data from the other old disk to the new disk and re-establish redundancy. I could then repeat the process for the other drive.

The magic occurs when I add the second drive. ZFS recognises that the zpool has two larger drives and automatically expands the containing ZFS filesystems to fill the new capacity.

Note: in case you weren’t aware, you can have multiple filesystems on a zpool, each of which is only limited in size by the capacity of the largest underlying vdev. Unlike most filesystems, you don’t normally pre-define the size of the filesystem. You can set a quota on a filesystem if you need to, but it can be changed at any time. Neat huh?

Upgrade attempt one

Anyway, when I tried to do this, I hit a problem, caused by a lack of foresight on my part when I created the rdata zpool.

When you create a zpool, one of the parameters to the zpool create command is ashift. ashift is the base-2 logarithm of the sector size used to store data in the zpool. It defaults to the sector size reported by the vdev (i.e. disk) and must be the same for all vdevs in the zpool.

When I created rdata, ashift defaulted to 0 which means use the reported sector size of the vdevs: i.e. 512 bytes. This was the physical and logical sector size of the Western Digital disks used. The new Seagate 3TB disks use a 4k physical sector size (i.e. ashift=12) and thus cannot be added to the rdata zpool.
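The relationship between ashift and sector size is just a power of two, which makes the mismatch easy to see:

```shell
# ashift is the base-2 logarithm of the sector size
echo $((1 << 9))    # ashift=9  -> 512-byte sectors (the old WD drives)
echo $((1 << 12))   # ashift=12 -> 4096-byte sectors (the new Seagate drives)
```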

After some digging, I found this good explanation on the IBM website.

Upgrade attempt two

Thus I needed to change the approach. Rather than replacing the disks in the existing rdata zpool, I had to create a new zpool, copy the data across and then replace the zpools. The relevant info and instructions are:

# Power the system down, remove one element of rdata and install a new disk
# Power on
zpool create -o ashift=12 datapool *replacement-disk-1*  # create a new temporary zpool called datapool
cd /
rsync -av /rdata/ /datapool  # copy all data from rdata to datapool
zpool export rdata           # export both zpools so the system forgets about them
zpool export datapool
zpool import datapool rdata  # import the newly created zpool under its new name

At this point, there is a new rdata zpool with no redundancy. All I now needed to do was poweroff, replace the other disk and recreate a mirror. Unfortunately I hit another problem.

When the system was rebooted, it came back with the old rdata, even though I had exported it and replaced it with the renamed datapool. I was stumped for ages until I asked a question over on AskUbuntu. Luckily the answer came back fairly quickly (thanks jonathon).

Prior to rebooting, I needed to execute the following to update the Ubuntu RAMfs.

update-initramfs -c -k all

After I did this, I was able to poweroff, replace the disk, poweron and execute:

zpool attach rdata *replacement-disk-1* *replacement-disk-2*

This upgraded the zpool to a mirror using the two new disks and automatically resilvered it. Two hours later, I had a redundant 3TB media pool.