Sanoid together with "traditional" backup solutions

Hi,
I’ve tried searching around to see how people manage this kind of scenario, but I haven’t been able to find any definitive answers.

As doing pure ZFS backups off-site is still a bit of a niche unless you set up your own remote, I’ve settled on combining ZFS and Sanoid with Kopia for off-site backups to Backblaze B2. Ideally I’d like to point Kopia at snapshots so it backs up files in a known-consistent state, instead of folders that may see live activity during the backup. I could probably do something with Kopia’s pre-hooks, but since I’m running it in a Docker container that would be quite the hassle.

As Sanoid is already running on the machine and dutifully making snapshots anyway, I might as well just use those. However, each snapshot has a unique name, which presents a bit of a challenge when passing volumes to Kopia in Docker.

Way back in the day I used an rsync- and hard-link-based program called Back In Time on some machines. It always created a symlink called “latest” pointing to - you guessed it - the latest snapshot, for exactly these kinds of use cases. Does Sanoid allow something similar? Or would it be better to run something alongside Sanoid that manages the creation and deletion of a snapshot with a fixed name for this use case?

I thought there would be a bit more written about this, as basing backups on snapshots seems like the move - especially for databases.

Any thoughts? :smile:

It’s probably easier just to create an ephemeral snapshot specifically for your kopia backup, which you can immediately destroy on the local end once that backup finishes. So, a bash script along the lines of this pseudocode:

#!/bin/bash
set -euo pipefail
zfs snapshot mypool/myds@kopia-tmp
# assumes you've already connected kopia to your repository
# (e.g. with `kopia repository connect`)
kopia snapshot create /mypool/myds/.zfs/snapshot/kopia-tmp
zfs destroy mypool/myds@kopia-tmp

I do this! Like mercenary_sysadmin said, I would suggest making your kopia backup a separate procedure: take a snapshot, run kopia, destroy the snapshot. I wouldn’t try to tie it in with Sanoid.

One thing I’d add: for nested datasets, you could consider taking the snapshot, bind mounting (or nullfs-mounting) all the .zfs/snapshot/name directories into one place, and then running kopia just once on that directory. That way, the directory tree in your kopia backup matches the layout of your ZFS datasets.

I have a script that automates this process on FreeBSD: GitHub - neapsix/kopiatomic: Make atomic kopia backups with zfs and FreeBSD

Disclaimer: I’ve only done basic testing on this script, so be careful. The one I actually use is a similar script with restic instead of kopia (see the “restomic” branch). I can’t guarantee anything with the kopia version.

Also–heads up that you might hit weird edge cases running some backup software on ZFS snapshots. In my testing, kopia seemed to handle it OK, but I had to use a patched version of restic to get around an issue where it sees directories as modified every time you take a new snapshot.