Snapshots to a different zpool on the same host as part of a 3-2-1 backup strategy?


I am a ZFS noob planning to use it for the first time. Please forgive me if I do not use the correct terminology.

I have a few GBs of documents (plain text files, PDFs, etc.) that I want to store on an ODROID H3+.

The H3+ is populated with the following drives:

  • 500GB NVMe - OS Drive
  • 4TB HDD
  • 4TB HDD

Using this ODROID and a cloud storage provider, I’d like to get as close as possible to 3-2-1 backups + bit rot protection. I’m shooting for an RTO of 2 weeks and an RPO of 1 day. Performance / throughput is not a concern.

Here is my plan:

Using NixOS or Ubuntu, I was going to put ZFS on all three drives. The 500GB NVMe would be in a zpool by itself and the two 4TB disks would be in a mirrored zpool like this:


zpool1

  • 500GB NVMe - OS Drive

zpool2 - mirrored

  • 4TB HDD
  • 4TB HDD

To achieve bit rot protection, I am planning to save the ‘original’ documents to zpool1 and set copies=3.
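In commands, I’m imagining something like this (dataset names are just placeholders I made up):

```shell
# Create a dataset for the documents and keep three copies of every block,
# so ZFS can self-heal localized corruption even on a single-disk pool.
zfs create zpool1/documents
zfs set copies=3 zpool1/documents

# Verify the property took effect.
zfs get copies zpool1/documents
```

(As I understand it, copies= only applies to data written after the property is set, so I’d set it before copying anything in.)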

To achieve 3-2-1, I am planning to take frequent snapshots of the important directories on zpool1 and ‘send’ them to zpool2. Then, I will use Restic (or Borg?) to send encrypted backups to a cloud storage provider.
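Roughly this flow, if I understand the tooling correctly (dataset names and the repository URL are placeholders):

```shell
# 1. Snapshot the important dataset on zpool1.
zfs snapshot zpool1/documents@daily-2024-06-01

# 2. Replicate the snapshot to the mirrored pool on the same host.
zfs send zpool1/documents@daily-2024-06-01 | zfs recv zpool2/backup/documents

# 3. Push an encrypted copy off-site with restic.
export RESTIC_REPOSITORY="s3:https://example.com/bucket"  # placeholder repo
restic backup /zpool1/documents
```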

I’d appreciate any advice on optimizing my ZFS usage, but here are some specific questions I have:

  1. Is it weird to send snapshots from one zpool to another on the same host?
  2. Will the size differences on the two pools cause problems when taking or restoring snapshots?
  3. If I go the snapshot route, do I need the set copies=3 on the important directories in zpool1, or would any corruption be caught and corrected during the snapshot send to zpool2?
  4. Instead of the snapshots, would it make more sense to rsync the files to zpool2 and take snapshots of zpool2? This should still protect against accidental deletion without needing to ‘send’ the snapshots.

Thanks in advance!

This sounds like a sensible scenario to me. Here’s what I think about those specific questions:

  1. It’s not weird at all to send/recv between two zpools on the same host.
  2. I can’t foresee any trouble on this front.
  3. I’ll admit that I don’t have much experience with copies=, but no, snapshots won’t do anything to correct errors on a single-disk zpool; a send will surface any checksum errors it hits, but it can’t repair them. That’s not the end of the world; without extra copies it’ll just be like pretty much any other filesystem.
  4. You certainly could use rsync for this, but I don’t see any advantage over zfs send/recv. It’ll also take longer (rsync has to walk the whole filesystem each time, while an incremental send only touches changed blocks).
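For (1), a local send/recv is just a remote one minus the ssh pipe; a minimal sketch (dataset names are examples):

```shell
# Initial full replication to the second pool.
zfs snapshot zpool1/documents@snap1
zfs send zpool1/documents@snap1 | zfs recv zpool2/backup/documents

# Later runs only send the delta between the last common snapshot and the new one.
zfs snapshot zpool1/documents@snap2
zfs send -i @snap1 zpool1/documents@snap2 | zfs recv zpool2/backup/documents
```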

I’m using a setup similar to this, using zfs_autobackup.

It’s configured to take snapshots of my ZFS /home pool using the local tag, running this command with a systemd timer:

/root/.local/bin/zfs-autobackup -v --clear-mountpoint --keep-source="10,15min6h,1h1d,1d1w,1w1m,1m1y,1y5y" --keep-target="10,15min6h,1h1d,1d1w,1w1m,1m1y,1y5y" --snapshot-format "{}-%%Y-%%m-%%d-%%H:%%M:%%S" local tank/autobackup

The --keep-source and --keep-target flags configure the Thinner part of the tool, to ensure enough snapshots are kept on both the source and target pools. The --snapshot-format flag is used to make the snapshot names a bit more readable in snapshot listings.
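As a sketch, the systemd units could look something like this (unit names and schedule are examples, not my exact files; note that %% is how a literal % is escaped in an ExecStart line):

```ini
# /etc/systemd/system/zfs-autobackup-local.service (example name)
[Unit]
Description=zfs-autobackup for the "local" tag

[Service]
Type=oneshot
ExecStart=/root/.local/bin/zfs-autobackup -v --clear-mountpoint --keep-source="10,15min6h,1h1d,1d1w,1w1m,1m1y,1y5y" --keep-target="10,15min6h,1h1d,1d1w,1w1m,1m1y,1y5y" --snapshot-format "{}-%%Y-%%m-%%d-%%H:%%M:%%S" local tank/autobackup

# /etc/systemd/system/zfs-autobackup-local.timer (example name)
[Unit]
Description=Run zfs-autobackup every 15 minutes

[Timer]
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target
```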

The nice thing about zfs_autobackup is that it uses ZFS dataset properties to specify which datasets require backup. This allows for very granular control just by setting these properties. In my case, the property autobackup:local is set to true on all datasets that I want to snapshot with the command above. The local at the end of that command is the property name that zfs_autobackup looks for.
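Selecting datasets then is just a matter of setting properties (dataset names here are examples):

```shell
# Opt a dataset (and its children) into the "local" backup job.
zfs set autobackup:local=true tank/home

# Exclude a specific child dataset if needed.
zfs set autobackup:local=false tank/home/scratch

# List which datasets currently carry the property.
zfs get -r -s local autobackup:local tank
```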

More information in the manual. Let me know if you need more info; happy to share systemd unit files if that helps.