Why use ZFS
For the purposes of this HOWTO, I’ll assume the reader knows about vdevs, zpools and zfs filesystems. If not, take a look at this tutorial on the Oracle web site.
The initial build had 2 x 1TB drives because that’s what I had available from my previous media server, a Fujitsu Scaleo Windows Home Server. However, that was never going to be enough, so I knew that I would need to upgrade the disks.
One of the beauties of ZFS is the ease with which you can expand a zpool (in theory anyway). In my case, I have two zpools. “rpool” is built from one vdev, a 160GB SATA drive. “rdata” is built from 2 x 1TB drives to provide a 1TB mirror. rpool has the root filesystem on it and rdata has all the media files.
In theory, to expand the size of rdata, all I needed to do was to shut down the system, replace one of the 1TB drives with a 3TB drive, power on and execute the command:
zpool replace rdata *old-drive-name* *replacement-drive-name*
This would break the mirror temporarily and then add the new drive. The system would then automatically “resilver” the mirror: i.e. copy all the data from the other old disk to the new disk and re-establish redundancy. I could then repeat the process for the other drive.
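The intended process sketched as commands (the device names here are placeholders, not my actual disks; real setups should use stable `/dev/disk/by-id` paths):

```shell
# Replace one half of the mirror with the new, larger disk
# (hypothetical device names -- substitute your own by-id paths)
zpool replace rdata ata-WDC-1TB-disk1 ata-ST3000-disk1

# Watch the resilver copy data from the surviving disk onto the new one
zpool status rdata

# Once resilvering completes, repeat for the second disk
zpool replace rdata ata-WDC-1TB-disk2 ata-ST3000-disk2
```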
The magic occurs when I add the second drive. ZFS recognises that both drives in the zpool are now larger and automatically expands the pool, and the ZFS filesystems it contains, to fill the new capacity.
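One caveat worth noting: on most ZFS implementations this expansion is only automatic if the pool's `autoexpand` property is on; otherwise each device has to be nudged with `zpool online -e`. A sketch (with placeholder device names):

```shell
# Enable automatic expansion before replacing the disks...
zpool set autoexpand=on rdata

# ...or, after both larger disks are in place, expand each device manually
zpool online -e rdata ata-ST3000-disk1
zpool online -e rdata ata-ST3000-disk2

# The extra space should now show up in the pool's SIZE column
zpool list rdata
```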
Note: in case you weren’t aware, you can have multiple filesystems on a zpool, each of which is limited in size only by the free capacity of the zpool itself. Unlike most filesystems, you don’t normally pre-define the size of a filesystem. You can set a quota on one if you need to, and it can be changed at any time. Neat, huh?
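For example (the dataset names below are illustrative, not from my actual setup):

```shell
# Carve several filesystems out of the same pool -- no sizes needed up front
zfs create rdata/music
zfs create rdata/video

# Optionally cap one of them; the quota can be raised or removed at any time
zfs set quota=200G rdata/music
zfs set quota=none rdata/music
```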
Upgrade attempt one
Anyway, when I tried to do this, I hit a problem caused by a lack of foresight on my part when I created the rdata zpool.
When you create a zpool, one of the parameters to the zpool create command is ashift. ashift is the base-2 logarithm of the sector size used to store data in the zpool, so ashift=9 means 512-byte sectors and ashift=12 means 4 KiB sectors. It defaults to the sector size reported by the vdev (i.e. disk) and must be the same for all vdevs in the zpool.
When I created rdata, ashift defaulted to 0, which means “use the sector size reported by the vdevs”: i.e. 512 bytes, the physical and logical sector size of the Western Digital disks I was using. The new Seagate 3TB disks use a 4k physical sector size (i.e. ashift=12) and thus cannot be added to the rdata zpool.
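You can check what ashift a pool actually ended up with. One way, assuming zdb can read the pool's configuration:

```shell
# Report the ashift recorded for each vdev in the pool
# ashift: 9 means 512-byte sectors, ashift: 12 means 4 KiB sectors
zdb -C rdata | grep ashift
```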
After some digging, I found this good explanation on the IBM website.
Upgrade attempt two
Thus I needed to change my approach. Rather than replacing the disks in the existing rdata zpool in place, I had to create a new zpool, copy the data across and then swap the zpools. The relevant commands are:
# Power the system down, remove one element of rdata and install a new disk
# Power on
zpool create -o ashift=12 datapool *replacement-disk-1*   # create a new temporary zpool called datapool
cd /
rsync -av /rdata/ /datapool/   # copy all data from rdata to datapool (trailing slashes copy the contents, not the directory itself)
zpool export rdata             # export both zpools so the system forgets about them
zpool export datapool
zpool import datapool rdata    # import the newly created zpool under a new name
At this point, there is a new rdata zpool with no redundancy. All I now needed to do was power off, replace the other disk and recreate the mirror. Unfortunately I hit another problem.
When the system was rebooted, it came back with the old rdata, even though I had exported it and replaced it with the renamed datapool. I was stumped for ages until I asked a question over on AskUbuntu. Luckily the answer came back fairly quickly (thanks jonathon).
Prior to rebooting, I needed to execute the following to update the Ubuntu RAMfs.
update-initramfs -c -k all
update-grub
After I did this, I was able to power off, replace the disk, power on and execute:
zpool attach rdata *replacement-disk-1* *replacement-disk-2*
This upgraded the zpool to a mirror of the two new disks and automatically resilvered it. Two hours later, I had a redundant 3TB media pool.
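To confirm the mirror is healthy and the new capacity is visible, a quick sanity check:

```shell
# Both disks should show ONLINE under a mirror vdev, with no known data errors
zpool status rdata

# SIZE should now reflect the 3TB disks
zpool list rdata

# And the filesystems on the pool should see the extra space
zfs list -r rdata
```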