Special vdev question

Hi there! I have what I hope are some relatively straightforward questions for the ZFS gurus about expanding a special vdev. For a bit under a year I’ve been running a ~40TB rust pool (50% used) with mirrored 960GB NVMe drives as a special vdev. I’ve been quite impressed with the performance and overall “snappiness” of this combination, and now I want more.

I’d like to expand my special vdev to 3.84TB and set special_small_blocks equal to the recordsize on several of my more I/O-constrained datasets, thus routing all of the data in those datasets to NVMe and effectively making my pool even more of a hybrid storage pool. Obviously I’ll need to recopy any existing data in these “special” datasets in order for it to land on the special vdev.

My questions are as follows: can I simply do a zpool replace pool old-nvme1 new-nvme1 followed by zpool replace pool old-nvme2 new-nvme2? Assuming my pool has autoexpand=on set, will it then automagically recognize the additional space on the special vdev? Should I run a scrub after each replace, or only after replacing both special vdev drives? Or does a special vdev resilver effectively act as a scrub?
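For concreteness, the sequence I have in mind looks something like this (pool name and device names are just placeholders for my actual disk IDs):

```shell
# Let the pool grow into larger replacement devices automatically.
zpool set autoexpand=on pool

# Replace each special-vdev mirror member one at a time,
# waiting for the resilver to finish before starting the next.
zpool replace pool old-nvme1 new-nvme1
zpool status pool          # wait until resilvering completes
zpool replace pool old-nvme2 new-nvme2
zpool status pool

# Route ALL blocks of an I/O-constrained dataset to the special vdev
# by making special_small_blocks match its recordsize.
zfs set recordsize=128K pool/mydata
zfs set special_small_blocks=128K pool/mydata
```

Since special_small_blocks only applies to newly written blocks, existing data in those datasets would still need to be rewritten (copied off and back, or otherwise rewritten) to migrate onto the special vdev.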

I am of course also aware that should something go sideways while replacing the special vdev, I could lose my whole pool. I do have backups just in case.


Replacing the drives one at a time with larger ones should work; I don’t know of any reason it would be different for a special vdev. Depending on the version of ZFS, you may need to run zpool online -e (or have autoexpand=on set) to trigger the resize.

I don’t think you need to kick off a scrub following the resilver. In any case, the resilver will read all of the data from the original drive, performing full checksum verification as it goes.

You will still have redundancy during the operation, but part of it will be offline while each drive resilvers. It’s always wise to keep your backups current.


Yes, special vdevs expand like any other mirror vdevs do. You may need to use zpool online -e on the vdev to trigger the expansion after you replace the last small drive in the vdev, but that’s easy enough.
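For example, assuming the pool is named pool and the replacement drives show up as new-nvme1 and new-nvme2 (placeholder names):

```shell
# Tell ZFS to grow into the new space on each replaced member;
# this is harmless if the pool has already auto-expanded.
zpool online -e pool new-nvme1
zpool online -e pool new-nvme2

# Confirm the special vdev now reports the larger size.
zpool list -v pool
```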

You don’t need to scrub after replacing vdev members, because the resilvering itself forces checksum validation on every block in the vdev. What you might want to consider is running a scrub before removing any member disk, so that you can make sure not to orphan any undetected corrupt blocks that you could have repaired, if you hadn’t already gotten rid of your only source of redundancy! :upside_down_face:
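In other words, a pre-flight check along these lines before pulling either drive:

```shell
# Scrub while both mirror halves are still present, so any corrupt
# block can still be repaired from the surviving copy.
zpool scrub pool
zpool status pool   # wait for "scrub repaired 0B ... with 0 errors"
```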


FWIW I spun up an Ubuntu VM and tried this with some virtual disks. Everything worked, and it even auto-expanded the special vdev as y’all predicted. Looks like I’ma give this a go as soon as my larger NVMe drives arrive and are installed! Thanks for the help!
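For anyone who wants to repeat the experiment without a VM, something along these lines with file-backed vdevs should work (requires ZFS installed, run as root; sizes and paths are arbitrary):

```shell
# Throwaway pool: a data mirror plus a small special mirror.
truncate -s 1G /tmp/d1 /tmp/d2
truncate -s 256M /tmp/s1 /tmp/s2
truncate -s 512M /tmp/s1-big /tmp/s2-big   # larger "replacement drives"

zpool create -o autoexpand=on testpool mirror /tmp/d1 /tmp/d2 \
    special mirror /tmp/s1 /tmp/s2

# Replace the special mirror members one at a time, as on real hardware.
zpool replace testpool /tmp/s1 /tmp/s1-big
zpool replace testpool /tmp/s2 /tmp/s2-big

zpool list -v testpool   # special vdev should now show the larger size
zpool destroy testpool
```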


In case anyone wants further evidence that this works as expected: I performed the same procedure in “production” on my home NAS, and once both smaller NVMe drives in the special vdev were replaced, the zpool reported the additional space. Loving my bigger “hybrid” ZFS pool.