Upgrading the disk space in my ZFS-based servers – pt 3

Part 1, Part 2
It was now time to try the same recipe on the main server: deneb.

On Deneb

root@deneb ~ $ zpool status
  pool: zones
 state: ONLINE
  scan: scrub repaired 0 in 7h21m with 0 errors on Fri May 19 18:22:48 2017
config:

    NAME        STATE     READ WRITE CKSUM
    zones       ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        c1t0d0  ONLINE       0     0     0
        c1t1d0  ONLINE       0     0     0
      mirror-2  ONLINE       0     0     0
        c1t2d0  ONLINE       0     0     0
        c1t3d0  ONLINE       0     0     0
    logs
      c1t4d0    ONLINE       0     0     0

errors: No known data errors

Downgrade one of the mirrors to make room for a new 4TB disk that can be used as a temporary store for deneb's data.

root@deneb ~ $ zpool detach zones c1t3d0
root@deneb ~ $ zpool status
  pool: zones
 state: ONLINE
  scan: scrub repaired 0 in 7h21m with 0 errors on Fri May 19 18:22:48 2017
config:

    NAME        STATE     READ WRITE CKSUM
    zones       ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        c1t0d0  ONLINE       0     0     0
        c1t1d0  ONLINE       0     0     0
      c1t2d0    ONLINE       0     0     0
    logs
      c1t4d0    ONLINE       0     0     0

errors: No known data errors
root@deneb ~ $ poweroff
poweroff: Halting 9 zones.

I removed disk 4 and installed the third new 4TB disk in its place.
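Before putting a pool on the new disk, it's worth double-checking that it really did show up as c1t3d0. This check isn't in my original notes, but on SmartOS something like the following lists the attached disks and their device names:

diskinfo            # SmartOS disk listing: type, device, vendor, product, size
format </dev/null   # the illumos format utility also prints the available disks before exiting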

root@deneb ~ $ zpool status
  pool: zones
 state: ONLINE
  scan: scrub repaired 0 in 7h21m with 0 errors on Fri May 19 18:22:48 2017
config:

    NAME        STATE     READ WRITE CKSUM
    zones       ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        c1t0d0  ONLINE       0     0     0
        c1t1d0  ONLINE       0     0     0
      c1t2d0    ONLINE       0     0     0
    logs
      c1t4d0    ONLINE       0     0     0

errors: No known data errors
root@deneb ~ $ zpool create newzones c1t3d0
root@deneb ~ $ zpool status
  pool: newzones
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    newzones    ONLINE       0     0     0
      c1t3d0    ONLINE       0     0     0

errors: No known data errors

  pool: zones
 state: ONLINE
  scan: scrub repaired 0 in 7h21m with 0 errors on Fri May 19 18:22:48 2017
config:

    NAME        STATE     READ WRITE CKSUM
    zones       ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        c1t0d0  ONLINE       0     0     0
        c1t1d0  ONLINE       0     0     0
      c1t2d0    ONLINE       0     0     0
    logs
      c1t4d0    ONLINE       0     0     0

errors: No known data errors

Now I can clone zones onto newzones.

root@deneb ~ $ zfs snapshot -r zones@txfr1
root@deneb ~ $ zfs send -R zones@txfr1 | zfs recv -F newzones

This took a long time!
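Piping the stream through pv, as in the eridani transfer in part 2 below, would at least have shown a running byte count and transfer rate while waiting. A sketch of the same copy with progress reporting, assuming pv is installed in the global zone:

zfs send -R zones@txfr1 | pv | zfs recv -F newzones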

Upgrading the disk space in my ZFS-based servers – pt 2

See here for part 1

On Eridani

Once the initial send/recv had completed, I did another snapshot and sent the incremental data just in case anything had changed.

root@eridani ~ $ zfs snapshot -r zones@txfr2
root@eridani ~ $ zfs send -R -i txfr zones@txfr2 | pv | zfs recv tempzone

Lastly, I rolled tempzone back to the new snapshot, making it the current state of the pool.

root@eridani ~ $ zfs rollback tempzone@txfr2

As a final (paranoid) check, I ran a dry-run rsync to confirm that /zones and /tempzone were identical.

root@eridani ~ $ rsync -avn /zones/ /tempzone/ | less
sending incremental file list
global/cores/

sent 765011152 bytes  received 2764561 bytes  486087.82 bytes/sec
total size is 3454738169591  speedup is 4499.67 (DRY RUN)

Nothing had changed, so I could now swap the pools over.

root@eridani ~ $ zpool export tempzone

GOTCHA!

I couldn’t export the root zones pool at this point because it had mounted filesystems that were in use by the running system. To get further I had to reboot the system into restore/recovery mode.
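To see what was in the way, something along these lines (not from my session, just a sketch using the illumos tools) shows which of the pool's datasets are mounted and which processes are holding the filesystem open:

zfs mount | grep '^zones'   # datasets from the zones pool that are currently mounted
fuser -c /zones             # processes with files open on the filesystem mounted at /zones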

In this mode, no pools are imported, so I could execute the following commands:

zpool status                          # confirm that no pools were imported
zpool import                          # see what pools were available for import
zpool import -NR /t1 tempzone zones   # -N avoids mounting any datasets, -R sets an alternate root; imports tempzone under its new name, zones
zpool import -NR /t2 zones oldzones   # likewise, imports the old zones pool under the name oldzones
zpool export oldzones
zpool export zones
reboot
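Importing a pool under a different name renames it permanently, so just before that reboot a plain zpool import with no arguments (a check I'm adding here, not from the original session) should list the exported pools under their new names:

zpool import   # should now offer 'zones' (formerly tempzone) and 'oldzones' for import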

I was still getting errors even though the pools had been renamed. The problem turned out to be that when SmartOS boots, it seems to mount the pools in alphabetical order.

It is probably more accurate to say that SmartOS scans the disks in alphabetical order.

Thus, oldzones was mounted before zones and its datasets were grabbing the mount points.
Rather than laboriously changing the mountpoint property on every dataset, I simply disconnected the disk.
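For the record, the laborious alternative would have looked roughly like this (a sketch only, assuming oldzones had been imported with -N from recovery mode; untested):

# stop every filesystem in oldzones from claiming a mountpoint
zfs list -rH -o name -t filesystem oldzones | while read ds; do
    zfs set mountpoint=none "$ds"
done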

With the old disk disconnected, eridani booted using the new pool.

root@eridani ~ $ 
root@eridani ~ $ zpool status
  pool: zones
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    zones       ONLINE       0     0     0
      raidz1-0  ONLINE       0     0     0
        c2d1    ONLINE       0     0     0
        c3d0    ONLINE       0     0     0
        c3d1    ONLINE       0     0     0

errors: No known data errors
root@eridani ~ $ zfs list
NAME                                                                  USED  AVAIL  REFER  MOUNTPOINT
zones                                                                2.52T  2.73T   318K  /zones
zones/archive                                                        29.3K  2.73T  29.3K  none
zones/backup                                                         2.50T  2.73T  29.3K  /zones/backup
zones/backup/deneb                                                   2.50T  2.73T  29.3K  /zones/backup/deneb
zones/backup/deneb/zones                                             2.50T  2.73T   324K  /zones/backup/deneb/zones
...
zones/config                                                         55.9K  2.73T  36.0K  legacy
zones/cores                                                          58.6K  2.73T  29.3K  none
zones/cores/global                                                   29.3K  10.0G  29.3K  /zones/global/cores
zones/dump                                                           1023M  2.73T  1023M  -
zones/opt                                                             423M  2.73T   422M  legacy
zones/swap                                                           17.9G  2.74T  1.44G  -
zones/usbkey                                                         38.6K  2.73T  38.6K  legacy
zones/var                                                            7.08M  2.73T  5.42M  legacy
root@eridani ~ $ zfs mount
zones                           /zones
zones/backup                    /zones/backup
zones/backup/deneb              /zones/backup/deneb
zones/backup/deneb/zones        /zones/backup/deneb/zones
...
zones/backup/deneb/zones/usbkey  /zones/backup/deneb/zones/usbkey
zones/backup/deneb/zones/var    /zones/backup/deneb/zones/var
zones/cores/global              /zones/global/cores
zones/var                       /var
zones/config                    /etc/zones
zones/opt                       /opt
zones/usbkey                    /usbkey
root@eridani ~ $