Upgrading the disk space in my ZFS-based servers – pt 1

Introduction

Back in 2016 I bought a Supermicro server and installed SmartOS to create a very capable home server. I named this server Deneb. See my posts on my new ZFS home server for more details.
Some time later, I bought an old Dell T310 to act as a backup device for Deneb. I named this server Eridani and use ZnapZend to perform 4-hourly snapshots and nightly backups of Deneb to Eridani.
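For anyone wanting to replicate the setup, the snapshot and replication schedule lives in a znapzend plan created with znapzendzetup. The sketch below is illustrative only: the dataset names and retention plans are placeholders rather than my exact configuration (the SRC plan keeps 4-hourly snapshots locally for 30 days, the DST plan keeps daily copies on Eridani for 90 days).

# Illustrative only: dataset names and retention plans are placeholders
znapzendzetup create --recursive \
  SRC '30d=>4h' zones/data \
  DST '90d=>1d' root@eridani:zones/backup/deneb
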
All has been working well, but I am now approaching the point where I am running out of space on Eridani and Deneb is not far off a similar position.

  • Eridani is close to full capacity (>90%)
  • Deneb is wasteful in its use of disk. It uses two dual-disk mirrors instead of raidz1
  • Deneb is also at 81% of its 3.62TB capacity
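
Both sets of figures come straight from zpool list, which reports each pool's size alongside a CAP column showing the percentage used:

# Check pool size and percentage used (run on each server)
zpool list zones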

Objectives

After some consideration, I set two objectives: provide 3-5 years of space headroom on both servers, and achieve that partly by moving to raidz1 to improve space utilisation.

Current Configurations

Deneb

  • Supermicro server, X10SDV-TLN4F motherboard, 32GB ECC RAM, single XEON
  • 4 x HGST Ultrastar 2TB 7200rpm SATA disks
  • 2 x mirrored pairs
  • 4TB in total

Eridani

  • Dell T310 server, single Xeon, 16GB ECC RAM
  • 2 x 3TB disks in mirror
  • 3TB in total

Strategy

  • Move Deneb and Eridani to raidz1
  • Increase Deneb from 4TB to 6TB (4 x 2TB disks in raidz1 = 6TB)
  • Increase Eridani to 9TB (2 x 4TB + 2 x 3TB disks in raidz1 = 9TB)

At a later point, I may replace the 3TB disks with 4TB ones to give 12TB. (For raidz1, usable space is roughly the number of disks minus one, multiplied by the size of the smallest disk in the vdev, which is where the 6TB, 9TB and 12TB figures come from.)

In addition, I proposed to move a few zones from Deneb to Eridani. Deneb is currently used to back up a number of other computers in the house.

  • veeam is the backup target for the Windows computers in the house; they all run Veeam nightly.
  • cfg-backups runs rsnapshot to back up a number of Raspberry Pi computers in the house.
  • Lastly, dev-backups contains dd images of the Raspberry Pi computers that can be loaded by NOOBS.

Moving these to Eridani would free 700G on Deneb without increasing space usage on Eridani.
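
Moving a zone between SmartOS hosts can be done with vmadm's send and receive subcommands (still flagged as experimental), or by zfs-sending the zone's dataset by hand. A rough sketch of the vmadm route, with a placeholder UUID, might look like this:

# Sketch only: the UUID is a placeholder and vmadm send/receive are experimental
vmadm stop 01234567-89ab-cdef-0123-456789abcdef
vmadm send 01234567-89ab-cdef-0123-456789abcdef | ssh root@eridani vmadm receive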

High Level Plan

Part 1 – Eridani

  1. Buy 3 new 4TB disks
  2. Split the mirror on Eridani to free one of the 3TB disks
  3. Create a new 6TB raidz1 pool with the freed 3TB disk and 2 of the new 4TB disks
  4. Migrate the existing pool to the new pool
  5. Destroy the old pool and add its 3TB to the new pool, making 9TB

Part 2 – Deneb

  1. Split one of the mirrors on Deneb and replace the detached disk with the third 4TB disk
  2. Create a new temporary pool containing just the 4TB disk
  3. Migrate the existing pool to the temporary pool
  4. Destroy the old pool and rebuild it as raidz1, giving 4TB
  5. Migrate the data back to the newly built pool
  6. Destroy the temporary pool and replace the 4TB disk with the original 2TB disk
  7. Add the 2TB disk into the raidz1 pool to make 6TB
  8. Keep the 4TB disk as a spare.
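
Deneb itself is the subject of part 2, but the first couple of steps should look much like the Eridani procedure that follows. A minimal sketch, with a hypothetical device name standing in for Deneb's real controller/disk numbering:

# Sketch only: c1t3d0 is a hypothetical device name, not Deneb's actual disk
zpool detach zones c1t3d0     # step 1: free one disk from the second mirror
poweroff                      # swap the detached 2TB disk for the new 4TB disk
zpool create tempzone c1t3d0  # step 2: single-disk temporary pool on the 4TB drive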

Implementation

Prepare Eridani

The first task was to reduce the existing mirrored zpool to a single disk so that I could reuse the second disk as part of the new raidz1 pool.

After logging in to Eridani…

  1. Check the status of the pool
root@eridani ~ $ zpool status
pool: zones
state: ONLINE
scan: scrub repaired 0 in 9h1m with 0 errors on Fri Sep 22 05:01:02 2017
config:

NAME          STATE     READ WRITE CKSUM
zones         ONLINE       0     0     0
  mirror-0    ONLINE       0     0     0
    c2d0      ONLINE       0     0     0
    c3d0      ONLINE       0     0     0

errors: No known data errors
  2. Detach the second disk so that it can be reused in the new raidz1 pool
root@eridani ~ $ zpool detach zones c3d0
root@eridani ~ $ zpool status
pool: zones
state: ONLINE
scan: scrub repaired 0 in 9h1m with 0 errors on Fri Sep 22 05:01:02 2017
config:

NAME        STATE     READ WRITE CKSUM
zones       ONLINE       0     0     0
  c2d0      ONLINE       0     0     0

errors: No known data errors
root@eridani ~ $ poweroff

Upgrade the disks

With the power off, I installed two new 4TB HGST Deskstar NAS disks into the empty slots in the T310. These appeared as c2d1 and c3d1.

Adjust the BIOS settings to recognise the new disks

The T310 originally came with a PERC 6/i RAID controller. Unfortunately, the 6/i only supports 2TB disks or smaller, so I had to ditch it and use the SATA ports on the motherboard itself. This was not a great loss, as I wasn’t going to make use of the RAID capabilities anyway, but it did throw me a curve ball when I originally got the T310.

SMARTOS runs from an external USB drive rather than from the internal disks. This is not the default behaviour and has to be forced by adjusting the BIOS settings. Adding new disks resets the boot sequence to the default.

I accessed the T310 via the iDRAC and intercepted the boot sequence with <F2>. I then went into SATA Settings to check that the new disks had been registered. I then went into Boot Sequence and modified it to reboot from the external USB.

Reconfigure Eridani’s disks

root@eridani ~ $ zpool status
pool: zones
state: ONLINE
scan: scrub repaired 0 in 9h1m with 0 errors on Fri Sep 22 05:01:02 2017
config:

NAME        STATE     READ WRITE CKSUM
zones       ONLINE       0     0     0
  c2d0      ONLINE       0     0     0

errors: No known data errors
root@eridani ~ $ zpool create tempzone raidz1 c2d1 c3d0 c3d1
root@eridani ~ $ zpool status
pool: tempzone
state: ONLINE
scan: none requested
config:

NAME          STATE     READ WRITE CKSUM
tempzone      ONLINE       0     0     0
  raidz1-0    ONLINE       0     0     0
    c2d1      ONLINE       0     0     0
    c3d0      ONLINE       0     0     0
    c3d1      ONLINE       0     0     0

errors: No known data errors

pool: zones
state: ONLINE
scan: scrub repaired 0 in 9h1m with 0 errors on Fri Sep 22 05:01:02 2017
config:

NAME        STATE     READ WRITE CKSUM
zones       ONLINE       0     0     0
  c2d0      ONLINE       0     0     0

errors: No known data errors
root@eridani ~ $ zfs snapshot -r zones@txfr
root@eridani ~ $ zfs send -R zones@txfr | pv | zfs recv -F tempzone

Note: I used pv so I could see progress.
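
If you want pv to show a percentage and ETA rather than just the transfer rate, you can first ask zfs send for a dry-run estimate of the stream size and pass that figure to pv with -s. Something along these lines, where 2300G is a placeholder for whatever the estimate reports:

# Dry run: print an estimate of the replication stream size without sending anything
zfs send -R -nv zones@txfr
# Feed the estimated size to pv so it can display percentage complete and ETA
zfs send -R zones@txfr | pv -s 2300G | zfs recv -F tempzone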

To be continued

Author: Gareth Howell

IT Professional, radio amateur, scout leader and beer drinker. I enjoy helping small businesses use their limited IT budgets to best effect.
