ZFS send & receive disk exchange

I’m looking to do a disk exchange with friends running TrueNAS SCALE (they don’t necessarily have to run it, but this friend does). I’m curious about caveats and thoughts regarding:

  • Locally preparing a ZFS encrypted pool with either one or two disks (rough sketch below).
    • backing up my volume to it locally to save bandwidth
  • Exchanging those disks with a trusted friend who gives me the same.
  • Granting ZFS send access over WireGuard.
    • we should have at least 200 Mb/s of bandwidth on each end.
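To make the first two bullets concrete, here’s roughly what I have in mind — just a sketch, not a tested recipe; the pool/dataset names and device paths are placeholders:

```
# create an encrypted pool on the loaner disk(s); device paths are placeholders
zpool create -O encryption=aes-256-gcm -O keyformat=passphrase \
    backup mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2

# seed it locally before shipping, so only incrementals have to cross the tunnel later
zfs snapshot -r tank/data@seed
zfs send -R tank/data@seed | zfs receive -u backup/data
```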

Thanks for your thoughts!

I do exactly that to provide a remote backup at my son’s place, about a five-hour drive away. Bandwidth is limited by my upload speed, and daily backups take about ten minutes.


I recently shipped a drive off to zfs.rent, a commercial service that puts it in a datacenter, gives me a VM, and lets me ship ZFS snapshots to it. Similar to your use case, but I’m paying them monthly for the IP, the electricity, etc. Since it’s an untrusted third party, I’m using ZFS native encrypted datasets.

Before sending off my disk to them:

  1. Create a new pool rent and use syncoid to send all of my encrypted datasets from my home pool tank, a la syncoid -r --sendoptions="w" --no-sync-snap tank/dataset rent/dataset.
  2. Turn off auto mount on rent. I never plan to mount these disks on the remote machine or have my encryption key available over there, only send encrypted snapshots. zfs set canmount=noauto ...
  3. Ensure I’m using the same version of ZFS on both the tank and rent zpools. Not sure this one matters much, but I like having them on the same compatibility level. zpool set compatibility=openzfs-2.2-linux ...
  4. Hold a snapshot on tank and sync that to rent. Ensures I’ll have a common snapshot when my disk is mounted on the remote end with zfs.rent. zfs snapshot -r tank@hold-rent && zfs hold -r keep tank@hold-rent.
  5. Scrub rent one last time before sending it off.
  6. Export the pool (zpool export rent), remove the disk from my local system, pack it nicely, and ship it off.
  7. Pause or disable any reporting on failed syncs between tank and rent while the disk is in transit. Remove any sanoid snapshot policy I had set up on my local machine for rent.
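Condensed into commands, the prep looks roughly like this — a sketch rather than a copy-paste script; the device path and dataset names are placeholders:

```
# create the pool on the disk that will be shipped (device path is a placeholder)
zpool create rent /dev/disk/by-id/ata-EXAMPLE
zpool set compatibility=openzfs-2.2-linux rent

# raw (already-encrypted) replication from the home pool, no extra sync snapshot
syncoid -r --sendoptions="w" --no-sync-snap tank/dataset rent/dataset

# never auto-mount on the remote end; the key stays home
zfs set canmount=noauto rent/dataset

# pin a common snapshot on the source, then re-run syncoid so rent has it too
zfs snapshot -r tank@hold-rent
zfs hold -r keep tank@hold-rent
syncoid -r --sendoptions="w" --no-sync-snap tank/dataset rent/dataset

# final scrub (wait for it to finish), then export and pack the disk
zpool scrub rent
zpool export rent
```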

Once my disk was mounted remotely and I had access to the VM:

  1. Normal VM setup stuff, ensure I can SSH in, firewall, yada yada.
  2. Import the pool. Ensure the pool is imported using disk IDs: zpool import -d /dev/disk/by-id -aN.
  3. Follow the instructions on the syncoid wiki about running without root. I’m doing a push from my homelab to zfs.rent so I granted the following permissions on the remote machine: sudo zfs allow -u myuser compression,mountpoint,create,mount,receive,rollback,destroy rent and added --no-privilege-elevation to my syncoid commands. If you’re doing a pull, the permissions needed by the user on the source machine are different.
  4. Ensure zpool scrubs are scheduled on the remote machine.
  5. Set up any alerting, snapshot policy, etc. on the remote machine.
  6. Schedule syncoid to send from tank to rent (I’m using a cron job).
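Condensed, the remote side looks roughly like this — again a sketch; myuser, the hostname, and the cron schedules are only examples:

```
# on the zfs.rent VM: import by disk ID, don't mount anything or touch keys
sudo zpool import -d /dev/disk/by-id -aN

# let the unprivileged user receive pushed snapshots (per the syncoid wiki)
sudo zfs allow -u myuser compression,mountpoint,create,mount,receive,rollback,destroy rent

# keep the remote pool scrubbed (example: monthly)
echo '0 3 1 * * root zpool scrub rent' | sudo tee /etc/cron.d/zfs-scrub-rent

# on the homelab: nightly push, e.g. as a crontab entry
# 0 2 * * * syncoid -r --sendoptions="w" --no-sync-snap --no-privilege-elevation tank/dataset myuser@zfsrent-vm:rent/dataset
```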

EDIT to add: Despite using native ZFS encrypted datasets, the pool name and the dataset names are of course visible / not encrypted on the remote box. Something to keep in mind.
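For example, something like this run on the remote box would still list every dataset name, even though the keys never leave my house (just a sketch):

```
# names and structure are visible; the data stays encrypted and the keys stay unavailable
zfs list -r -o name,encryption,keystatus rent
```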

FWIW, I’m using Tailscale with some ACL magic to connect my homelab to my zfs.rent box.

So far, so good.
