Best Practices for Two-Way ZFS Backup Setup: Utilizing Sanoid and Syncoid

Hi everyone,

I’m looking for some advice on setting up ZFS storage servers for personal use, with the goal of creating a two-way backup system using Sanoid and Syncoid.

Here’s what I have in mind:

  • I want to build two ZFS storage servers using spinning rust (conventional hard drives).
  • Each server will have locally mirrored vdevs for redundancy.
  • I also plan to implement mutual remote cold storage backup for added protection.

Initially, I thought I needed separate pools for production and backup data to keep them isolated. However, I’ve learned that separate datasets can achieve the same isolation.

I’m considering using Sanoid for managing snapshots and Syncoid for replication. But I’m not sure about the best way to set them up:

  • Is it correct to run Sanoid and Syncoid on both servers for push replication?
  • Should each server define its own snapshot/replication/pruning policy for production datasets and ignore backup datasets created by the other server?

Also, I’m looking for tips on dataset naming and hierarchy to make the configuration easier to understand. Are there any considerations to ensure that the local and remote tools don’t interfere with each other?

Any advice or suggestions would be greatly appreciated!

Thanks!

Not quite. Instead, the server on the backup side should use a Sanoid module with the backup or hotspare template. Those templates prune stale snapshots, but do not take new ones.

If you just ignore it entirely on the backup side, you’ll accumulate snapshots until you run out of space. If you run Sanoid with a production template on the backup side, it’ll attempt to take new snapshots locally, which either get wiped out every time new replication comes in from production (best case) or make replication impossible without manual intervention, due to snapshot-name collisions between the two hosts (very, very common when people make this mistake).
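For reference, the templates shipped in sanoid.defaults.conf look something like this (the retention counts here are illustrative; check the defaults file installed with your version — the key difference is autosnap):

```ini
[template_production]
        frequently = 0
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes

[template_backup]
        frequently = 0
        hourly = 30
        daily = 90
        monthly = 12
        autosnap = no
        autoprune = yes
```

So the backup template still prunes on schedule, but never creates snapshots of its own.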

Name a dataset on each server “backup”. Name another dataset on each server the hostname of the other one–so, let’s say you’ve got servers Bob and Fred, each of which has a pool named “data”. Your dataset layout for Bob would look something like this:

data
|-------images
|         |-----VM1
|         |-----VM2
|
|-------files
|         |-----movies
|         |-----music
|
|-------backup
          |------Fred
                   |-----images
                   |       |---VM3
                   |       |---VM4
                   |
                   |------files
                           |----movies
                           |----music

You get the idea. This allows you to back up multiple sources onto a single target if necessary, without ever having the slightest confusion about what’s production data vs what’s backups, which backups belong to which system, etc.
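Assuming a pool named “data” on Bob, creating that layout is just a few commands (a sketch; in my experience syncoid will create the target child datasets itself on the first recursive replication, so you only need the backup parent):

```shell
# On Bob: production hierarchy
zfs create data/images
zfs create data/files
zfs create data/files/movies
zfs create data/files/music

# Parent for incoming backups; data/backup/Fred and everything
# under it gets created by the first replication from Fred:
zfs create data/backup
```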

You can also very easily set Sanoid policies that are machine-specific for your backups vs your production, backups of one machine vs backups of another, etc, and since you set them on a parent of the entire machine, you can just have them inherited for maximum simplicity and readability… or override them on a case-by-case basis if you want different handling of movies than of VM images on backups of Fred, etc etc. You get the idea!
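For example, Bob’s /etc/sanoid/sanoid.conf might look something like this (dataset names taken from the layout above; recursive = yes makes children inherit the policy):

```ini
# Local production data: snapshot and prune
[data/images]
        use_template = production
        recursive = yes

[data/files]
        use_template = production
        recursive = yes

# Everything replicated in from Fred: prune only, never snapshot
[data/backup]
        use_template = backup
        recursive = yes
```

Overrides then go in a more specific section, e.g. a [data/backup/Fred/images] block with its own retention values.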

Thanks! So each server is responsible for the following:

  1. create local snapshots of its local production data and replicate them.
    For this I can use Sanoid with “template_production” for the local datasets.
    It will create and purge local snapshots - according to the config.
    And I can use Syncoid to push snapshots of relevant datasets to the remote.

  2. define a retention policy for backup data received from the remote.
    For this I can use Sanoid with “template_backup” for the backup datasets.
    It does not create new snapshots (autosnap=no) but purges snapshots that were pushed by the remote - according to the config.

This way I can set up two (or more) ZFS servers that mutually back each other up.
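Concretely, I’m imagining cron entries like this on Bob (hostnames, paths, and schedules are just placeholders):

```text
# /etc/cron.d/zfs-backup on Bob
# Take and prune snapshots according to sanoid.conf:
*/15 * * * *  root  /usr/sbin/sanoid --cron
# Push production datasets into Fred's backup hierarchy:
0 * * * *  root  /usr/sbin/syncoid -r data/images root@fred:data/backup/Bob/images
5 * * * *  root  /usr/sbin/syncoid -r data/files  root@fred:data/backup/Bob/files
```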

Did I get this right?

Yes–although I will note that you can pull backups instead of pushing, and that’s the direction I generally recommend for security purposes.

(Counterintuitively, your production is generally the least secure part of your stack. Far, far less attack surface is necessary on a backup than is necessary for a production workload–and don’t get intimidated by the terminology; “my kids screwing around on laptops” qualifies as “production”!)


Well, in a two-way remote backup scenario both sides are production and backup at the same time. It’s turtles all the way down! :slight_smile:

Anyway, this looks like a workable solution! Many thanks for your advice!

Followup:

Two-way remote backup does not seem to be on the feature lists of TrueNAS SCALE/CORE or XigmaNAS. Could Sanoid and Syncoid be used in conjunction with such solutions?

This is not about whether they are great products, but whether combining them with Sanoid and Syncoid is possible and advisable!

You can absolutely use sanoid and syncoid on TrueNAS, and quite a few people do.

root@box2:~# syncoid -r root@box1:tank/box1 tank/backup/box1

root@box1:~# syncoid -r root@box2:tank/box2 tank/backup/box2

… and that’s the way you do two-way backup.

Great info here! And you may also get some further insight from a similar thread from almost a year ago, where similar topics and methods were discussed.

And, to Jim’s point about ‘pulling’ backups instead of pushing: I certainly understand your perspective with both sides being both primary and backup, but you can set up permissions on these datasets to limit your risk should one side be compromised in some way. Datasets can be secured, and the sync users can have their access limited. It’s just a question of how much risk you think there is, and how far you want to go to be safe. If these are your only two copies of your data, and you value that data, it may be worth considering.
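For example, with ZFS permission delegation you can let a dedicated unprivileged user replicate without being able to destroy anything (user and dataset names here are placeholders; see zfs-allow(8) for the full permission list):

```shell
# On the production box: the pull user may send and create the
# sync snapshots syncoid uses, but nothing destructive:
zfs allow syncuser send,snapshot,hold data/images

# On the backup box: the same user may only receive into
# its own corner of the backup hierarchy:
zfs allow syncuser receive,create,mount data/backup/Bob
```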
