Backstory: I have a 2x1TB mirror rpool (a single mirror vdev with 2x1TB SSDs) on my proxmox. This has basically all my LXCs and VMs and their data. (My “tank” is a separate zpool and not affected). It has been working great for 4 years. I am on Proxmox 6. (You can disagree, but I have been of the opinion if it ain’t broke, don’t major update). One of my mirror SSDs is failing. It is consumer grade and I want to replace it with an enterprise grade. The other SSD has all the same stats, so I am on borrowed time. I want to replace both. I will probably replace with 960GB enterprise. This is slightly smaller than the drives on the current mirror vdev.
I know that replacing a drive in a vdev with a smaller one is not easy. The suggestion I have seen is that I should create a new mirror vdev with the two new drives, add it to the rpool zpool, let it resilver onto the new vdev, then remove the old mirror vdev. (Of course I would need to copy the boot partition, etc. I have seen that in the Proxmox help and forum.)
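Roughly, as I understand it, the commands would look something like this (device paths and the "mirror-0" name are placeholders; I'd check zpool status for the real vdev name first):

    # Add the two new (smaller) SSDs as a second mirror vdev:
    zpool add rpool mirror /dev/disk/by-id/NEW_SSD_1 /dev/disk/by-id/NEW_SSD_2

    # Ask ZFS to evacuate and remove the old mirror vdev (name assumed to be mirror-0;
    # I gather removal can refuse to run if the two vdevs' ashift values don't match):
    zpool remove rpool mirror-0

    # Watch progress until the old vdev disappears:
    zpool status rpool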
If I am going to go through that trouble, I figure I could just install proxmox fresh on the new drives. No reason to add to the old one.
If I do this, I assume that I then cannot just zfs send the datasets from the old drive to the new one? Since proxmox will be updated I assume that the LXCs won’t work right if I just copy the data sets. What all do I need to do to “backup” and import the LXCs and VMs and all their pools (the pvesm add stuff)?
I haven’t tried it, but I wouldn’t be too surprised if zfs send-ing to the new rpool worked without much mucking about. The containers and VMs are defined in /etc/pve/lxc and /etc/pve/qemu-server. If you clone those directories to the new installation and the datastores have the same name (local-zfs by default), I can’t think of a reason they wouldn’t link up.
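Completely untested, but the core of it would be something like this, assuming the stock layout where local-zfs lives on rpool/data, and that you can get the old pool imported next to the new one under a temporary name like "oldrpool" (both being called rpool, you'd import the old one by GUID):

    # Snapshot the guest datasets on the old pool, then replicate them:
    zfs snapshot -r oldrpool/data@migrate
    zfs send -R oldrpool/data@migrate | zfs recv -F rpool/data
    # (the fresh install likely created an empty rpool/data already; you may need
    #  to destroy that first or receive elsewhere and rename)

    # Guest definitions to carry over - copy these while the old install is still
    # booted, since /etc/pve is a cluster-filesystem mount rather than a plain dir:
    #   /etc/pve/lxc/*.conf          (containers)
    #   /etc/pve/qemu-server/*.conf  (VMs)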
Another way to handle this would be to do the fresh install, add whatever datastore you’re using for backups and just restore from there.
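Something like this, with VMIDs, archive names and storage names as placeholders:

    # On the old install: dump each guest to the backup datastore.
    vzdump 101 --storage backup-store --mode snapshot     # container 101
    vzdump 100 --storage backup-store --mode snapshot     # VM 100

    # On the fresh install: add the same datastore (the pvesm add stuff), then restore.
    pct restore 101 /mnt/backup-store/dump/vzdump-lxc-101-....tar.zst --storage local-zfs
    qmrestore /mnt/backup-store/dump/vzdump-qemu-100-....vma.zst 100 --storage local-zfs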
Agreed. There might be some minor hitches and glitches along the way, I’m not sure; but you’re not destroying the source so it’s a pretty stress-free puzzle to solve IMO.
That’s a terrible suggestion, and it will leave you with a block reference tree that must be consulted on every single read from now until the end of time. That vdev removal feature is really only intended for removing accidentally added vdevs that haven’t had much data written to them yet; it will work on vdevs you’ve been using in production for many years, but there will be long-lasting consequences from doing so.
Boot into a rescue environment (one that supports ZFS) so that your boot drive and Proxmox aren't running.
Rsync whatever you need from the boot drive (assuming it’s non-ZFS) to the new boot drive
zfs-send the LXC/VM data (etc.) from the old zpool to the new zpool - you may need to tweak things so that the new zpool mounts properly on the new boot
Restart and all should be well.
Warning: the /etc/pve directory is a FUSE mount, so copying it over directly is pointless. It is backed by a file (look in /var/lib/pv*), and it is important to rsync those backing files while they are not mounted.
tl;dr rsync+zfs-send should do the trick (but from a rescue environment).
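In rough command form it might look like this (pool names, mount roots and the GUID are placeholders, and double-check the exact /var/lib path on your install - on mine the pmxcfs database sits under /var/lib/pve-cluster):

    # From a live environment with ZFS support, neither install booted.

    # Import both pools under alternate roots; since both are named "rpool",
    # import the old one by its numeric GUID and give it a temporary name:
    zpool import -f -R /mnt/old OLD_POOL_GUID oldrpool
    zpool import -f -R /mnt/new rpool

    # Replicate the guest datasets (rpool/data backs local-zfs on a default install;
    # if the fresh install already created an empty rpool/data, destroy it first):
    zfs snapshot -r oldrpool/data@move
    zfs send -R oldrpool/data@move | zfs recv -F rpool/data

    # Copy the pmxcfs backing files so /etc/pve comes back with your guest configs -
    # only safe while pmxcfs isn't running, which is the point of the rescue environment:
    rsync -a /mnt/old/var/lib/pve-cluster/ /mnt/new/var/lib/pve-cluster/

    # Export cleanly and reboot into the new install:
    zpool export oldrpool
    zpool export rpool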
Yikes, okay. If I do end up going this route, what is the best way to replace the two SSDs in my mirror vdev with two that are slightly smaller? (Even if I do not go this route, it would be helpful to know!)