If you have the spare SATA ports, I would always use zfs send | zfs receive. This is one of the exact use-cases it’s designed for, and it will write the data nice and evenly on the receiving end.
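A minimal sketch of that replication, assuming a source pool named tank and a destination pool named newtank (both names, and the snapshot name, are placeholders):

```shell
# send operates on snapshots, so take a recursive one first
zfs snapshot -r tank@migrate

# -R sends the whole dataset tree with its snapshots and properties;
# -u keeps the received datasets unmounted until you're ready to switch over;
# -d maps the sent paths under the new pool; -F lets the receive roll back
# the (empty) destination root dataset
zfs send -R tank@migrate | zfs receive -Fdu newtank
```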
I have enough space in mirror-1 so I’d like to remove mirror-0. Are there any downsides to zpool remove tank mirror-0?
I did a dry run:
zpool remove -n tank mirror-0
Memory that will be used after removing mirror-0: 45.6M
BTW what’s the etiquette here? I searched for zpool remove and this thread had my question but it wasn’t fully answered so I jumped onto it. Would it have been better to create a new thread?
Everything on disk-0 is mirrored on disk-2, and everything on disk-1 is mirrored on disk-3. The data is striped across mirror-0 and mirror-1. Removing either mirror-0 or mirror-1 would result in half of the pool missing.
If you want to remove disks, you would want to detach disk-2 from the mirror it shares with disk-0, and detach disk-3 from the mirror it shares with disk-1.
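Assuming the pool is named tank and the device names match the labels above, the detach commands would look like:

```shell
# Detach one side of each mirror; the remaining disks keep serving the data,
# but each vdev is now a single, unredundant disk.
zpool detach tank disk-2
zpool detach tank disk-3
```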
Edit: Looks like removing a mirror moves the data from the removed vdev to the remaining vdev(s).
The result looks like this (tested with loop devices):
Don’t do it. If you do, every single block that was stored on the removed vdev will now have an indirection link to it. When you ask for one of those blocks, you’ll still ask for it on the removed vdev, the pool will do a lookup to see that the block has moved to the remaining vdev, and THEN you’ll get the actual block.
You’ll also have noticeably undersized metaslabs on that one vdev if you replace 6TB drives with 18TB drives. The metaslab size is determined at vdev creation time and never updated. That probably wouldn’t be a big enough deal to matter if it were only a case of “I need to expand this one vdev from 6TB drives to 18TB drives,” but you’ve got more going on here than that.
In contrast to all this, if you simply set up a new pool with a single mirror vdev and then replicate all your data in, you’ll have the proper sized metaslabs for your big new drives, you won’t have any weird indirect links, everything will just be perfectly nice the way it’s meant to be.
Thanks for explaining that, so it’s not just a small memory overhead!
I only have four SATA ports available, so would you break the 18TB mirror so that I can create a new zpool on a single 18TB drive, and then replicate tank to it? Once done, destroy tank and attach the other 18TB drive. That would leave me with a single-mirror zpool on the 18TB drives and free up my two 6TB drives. This is my backup device, so worst case scenario I could replicate from prod and just lose some older snapshots.
Yup, that’s exactly right. Don’t forget to scrub the pool before breaking one of the mirrors. Then scrub the new pool after you finish replicating, before you destroy the old one.
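Putting those steps together, and assuming the 18TB drives are disk-1 and disk-3 (per the layout above), the new pool is called tank2, and the snapshot name is arbitrary — all placeholders — the whole sequence might look like:

```shell
# 1. Verify the source data is intact before giving up any redundancy.
zpool scrub tank
zpool status tank           # wait until the scrub completes cleanly

# 2. Break the 18TB mirror, freeing one drive for the new pool.
zpool detach tank disk-3

# 3. Create the new single-drive pool and replicate everything over.
zpool create tank2 disk-3
zfs snapshot -r tank@final
zfs send -R tank@final | zfs receive -Fdu tank2

# 4. Verify the copy, retire the old pool, then re-attach the other
#    18TB drive to turn tank2 back into a mirror.
zpool scrub tank2
zpool status tank2          # confirm the scrub found no errors
zpool destroy tank
zpool attach tank2 disk-3 disk-1
```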