Newbie Advice/Question: Migrating an ext4 Drive to ZFS, Plus Pool Setup and Recovery Procedures

Many thanks in advance for any advice. Manna from sysadmin heaven for everyone who responds :wink:

I’m using Debian Bookworm and the version of ZFS available in the repository.

I’m in the process of migrating my main Plex drive, which is currently a standalone ext4 volume. Financial constraints mean I only have two 10TB drives at the moment (one in use as the primary); the second will serve as a backup. Ideally, I’d configure them as a mirrored pool, but I’m hoping to extend the second drive’s lifespan by not keeping it running constantly. I’m not too concerned about redundancy; it’s not the end of the world if my media server goes down while I recover a disk.

I would greatly appreciate it if anyone could review the process I’ve outlined below and highlight any significant issues or suggest better approaches.

Create ZFS Backup Pool and Transfer

Step 1: I’ve created my backup pool and dataset on the backup drive, naming the pool “zfsbackup” and the dataset “spin” (a nod to spinning disk, though I admit the name could be improved).
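
For reference, this is roughly what I ran (the device path is a placeholder for my actual disk):

```
# Backup pool on the second 10TB drive (device path is a placeholder)
zpool create zfsbackup /dev/disk/by-id/ata-EXAMPLE_BACKUP_DISK

# One dataset to hold the media backup
zfs create zfsbackup/spin
```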

Step 2: I’m currently using rsync to transfer everything from the primary ext4 drive to the backup pool. So far, so good. Easy.
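
The rsync invocation is roughly this (the trailing slashes matter, so the contents get copied rather than the directory itself):

```
# Copy everything from the ext4 mount into the backup dataset
rsync -aHAX --info=progress2 /mnt/plex/ /zfsbackup/spin/
```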

Create a New ZFS Pool and Dataset for the Primary Disk (only one dataset)

Step 3: Shut down the processes using the primary ext4 disk (/dev/sdc, mounted at /mnt/plex), unmount it, and wipe its partition table.
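
Something like the following, assuming /dev/sdc really is the old primary (I’ll double-check with lsblk first):

```
# Stop Plex and anything else using the mount, then:
umount /mnt/plex

# Remove the old partition table and filesystem signatures
wipefs -a /dev/sdc
```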

Step 4: Create a new ZFS pool and dataset on the primary disk that was ext4. I might name it primarypool/plex (or something along those lines…).
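
My rough plan for the command, with a placeholder device path (I believe the dataset itself can also be created by zfs receive in the next step, so here I’d only create the pool):

```
# New pool on the drive that used to be ext4 (device path is a placeholder)
zpool create primarypool /dev/disk/by-id/ata-EXAMPLE_PRIMARY_DISK
```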

Step 5: Use zfs snapshot and zfs send/receive to replicate the backup I made earlier into the new primary pool.
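
What I have in mind, with a made-up snapshot name:

```
# Snapshot the backup dataset, then replicate it into the new pool.
# zfs receive creates primarypool/plex if it doesn't already exist.
zfs snapshot zfsbackup/spin@migrate1
zfs send zfsbackup/spin@migrate1 | zfs receive primarypool/plex
```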

Step 6: Set the mount point for the new primary pool/dataset, ensuring it’s identical to the one used for the ext4 setup, and verify it mounts at boot (is this automatic, or do I need to configure it?).
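
I’m assuming this is just a property change, and that on Debian the zfs-import-cache and zfs-mount services then handle the import and mount at boot, e.g.:

```
# Point the new dataset at the old ext4 mount point
zfs set mountpoint=/mnt/plex primarypool/plex

# Sanity checks
zfs get mountpoint,canmount primarypool/plex
zfs mount -a
```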

At this point, I should be all set with the new ZFS pool mounted, ready for applications to use the same mount point seamlessly. Cool beans, right? (or did I miss something?)

The next part, concerning recovery, is where I’m a bit uncertain.


Let’s say that, as a diligent home sysadmin, I’ve regularly connected the backup disk to take snapshots and used zfs send to keep my backup pool up to date. Then, one day, the primary drive fails. Without the funds to purchase a new drive immediately, I need to use the backup as the primary.

What exactly would be the process to make that backup pool the primary pool? After shutting down the services that are affected by the failure of my primary disk, what steps should I take?

Do I need to create a new primary pool or rename my backup pool and change its mount point? Would I use something like zfs promote to make my backup the new primary?
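
For what it’s worth, my best guess at the recovery steps is below, but I’m really not sure it’s the right approach (the optional rename-on-import is an assumption on my part):

```
# Move the backup disk to the SATA port, then import its pool
# (may need -f if it wasn't cleanly exported).
# A pool can be renamed at import time, e.g. `zpool import zfsbackup primarypool`,
# but here I just keep the original name.
zpool import zfsbackup

# Make the backup dataset appear where Plex expects the media
zfs set mountpoint=/mnt/plex zfsbackup/spin
zfs mount -a
```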

FYI, the backup is connected via USB, but in the event of a failure, I’d remove it from the enclosure and connect it to the same SATA port as the failed drive.