Update pool with syncoid to remote host

I’ve only ever used Syncoid to backup or archive a pool to another (empty) pool.

I am now trying to sync changes to a remote machine. I find it difficult to figure out from the docs how to update an already-synced pool.

The old pool and new pool were in the same machine; I used syncoid to sync the old pool to the new one.

I then took the disks from the new pool and placed them in another server.

Changes have since occurred on the old pool, and I want to sync those to the new pool on the remote server.

When I run syncoid -r backup/tank newserver:tank I get all kinds of nasty errors, but it seems to send something over.

Do I need to make new snapshots manually and send those over with syncoid, such as backup/tank@changes?

Also, if I wish to automate this, would it suffice to configure that in sanoid?

Thanks!

Can you offer more detail here?

Syncoid can do this, for sure, but the pathing needs to be right and the two pools need to share some snapshots. Think of it this way: aside from the initial run, syncoid’s whole purpose is to keep two existing pools synced.
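If it helps, a quick way to check whether the two sides still share snapshots is to list them on each side and compare. A rough sketch, using the dataset names from your syncoid command (adjust as needed):

On the old machine:

# zfs list -t snapshot -o name -r backup/tank

On the new server:

# zfs list -t snapshot -o name -r tank

For an incremental send to work, each source dataset and its counterpart on the target need at least one snapshot in common (the part after the @).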

If you’re able to iron out any errors/issues you’re seeing, this should work just fine.

If you could post some logs here, you may get some better input…


Errors:

# syncoid -r backup/tank/backup/icy-box remotehost:tank/backup/icy-box

CRITICAL ERROR: Target tank/backup/icy-box exists but has no snapshots matching with backup/tank/backup/icy-box!
                Replication to target would require destroying existing
                target. Cowardly refusing to destroy your existing target.


CRITICAL ERROR: Target tank/backup/icy-box/backup exists but has no snapshots matching with backup/tank/backup/icy-box/backup!
                Replication to target would require destroying existing
                target. Cowardly refusing to destroy your existing target.


CRITICAL ERROR: Target tank/backup/icy-box/backup/snapshot exists but has no snapshots matching with backup/tank/backup/icy-box/backup/snapshot!
                Replication to target would require destroying existing
                target. Cowardly refusing to destroy your existing target.


CRITICAL ERROR: Target tank/backup/icy-box/backup/syncthing exists but has no snapshots matching with backup/tank/backup/icy-box/backup/syncthing!
                Replication to target would require destroying existing
                target. Cowardly refusing to destroy your existing target.


CRITICAL ERROR: Target tank/backup/icy-box/data exists but has no snapshots matching with backup/tank/backup/icy-box/data!
                Replication to target would require destroying existing
                target. Cowardly refusing to destroy your existing target.


CRITICAL ERROR: Target tank/backup/icy-box/data/home exists but has no snapshots matching with backup/tank/backup/icy-box/data/home!
                Replication to target would require destroying existing
                target. Cowardly refusing to destroy your existing target.

Sending incremental backup/tank/backup/icy-box/data/music@syncoid_localhost_2023-07-12:15:04:58-GMT02:00 ... syncoid_localhost_2023-07-12:17:37:04-GMT02:00 (~ 4 KB):
2.13KiB 0:00:00 [6.48KiB/s] [==================================================>                                               ] 53%            
root@localhost ~ #

I get that it can’t sync an entire dataset with child datasets to a location that already has parts synced. I need to tell it to sync only new stuff. From the docs, I can’t make out how to do that.

For now, I simply synced that dataset to a non-existing dataset on the target host. It synced.

For the future, I need to figure out how I can set up a “set and forget” syncoid plan.

Why do they not have matching snapshots? Was the common snapshot deleted?

If it is not working because the old snapshot has been pruned/destroyed, you can use an existing snapshot on the target as an origin by specifying -o origin=<snapshot_name>.

If there are new files on both the old and new pools, unless you’ve segregated them by dataset, you can’t really “merge” them to keep the new files from both; syncoid/zfs send replicates at the dataset level. If that’s what you want, it would be more of a job for rsync.
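A rough sketch of that with rsync might look like the line below; the mountpoints are just placeholders for wherever those datasets are actually mounted:

# rsync -avhP /mnt/old-dataset/ remotehost:/mnt/new-dataset/

The trailing slashes tell rsync to copy the contents of the source directory into the target directory rather than nesting it, and unlike replication it leaves files that only exist on the target alone (unless you add --delete).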


Thanks, appreciate the help.

That’s a good question. I think I just skimmed that message and didn’t really think about what it said, but I will look into it.

I think I will create a separate area on a pool and experiment until I understand what happens and feel comfortable enough to actually use it for archiving/backup purposes. Right now I simply don’t understand what is happening or why I get error messages, and even when it works, I don’t trust that it’s actually doing what I expect. I’ve lost too many things in the past by making wrong assumptions.

No problem; that sounds great. You can check out the snapshots with sudo zfs list -t snapshot if you haven’t done so already. I think practicing with test data is a good idea.
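For example, to look at just one dataset and see when each snapshot was taken (using a dataset name from your earlier output), something like:

# zfs list -t snapshot -o name,creation -s creation -r backup/tank/backup/icy-box

That tends to be easier to read than listing every snapshot on the pool at once.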

I get that it can’t sync an entire dataset with child datasets to a location that already has parts synced. I need to tell it to sync only new stuff.

If I understand what you’re asking for correctly here–that you want new things from machineA:poolA/dataset to show up on machineB:poolB/dataset without undoing any local changes made on machineB:poolB/dataset–it can’t be done with ZFS replication (which is what syncoid orchestrates).

ZFS replication is snapshot-based. So, let’s start out with your first full replication from mA:poolA/dataset to mB:poolB/dataset.

root@mB:~# syncoid -r mA:poolA/dataset poolB/dataset

The first thing that happens here is syncoid takes a snapshot of poolA/dataset. The snapshot it takes will be named with the hostname and the current date, but for simplicity, let’s just call it “@1.”
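If you want to see that snapshot for yourself, something like this should show it; the real name will include your hostname and a timestamp, as in the log output you posted earlier:

root@mA:~# zfs list -t snapshot -r poolA/dataset | grep syncoid_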

Next, syncoid replicates the @1 snapshot from mA to mB, with a command that looks like this:

root@mB:~# ssh root@mA "zfs send -R poolA/dataset@1" | zfs receive poolB/dataset

After this finishes, you have a copy of mA:poolA/dataset@1 in place and mounted on mB, as poolB/dataset.
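You can verify that on mB with a quick listing; zfs list -t all shows the dataset and its snapshots together:

root@mB:~# zfs list -t all -r poolB/dataset

You should see poolB/dataset itself plus poolB/dataset@1 (and the same for any child datasets).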

Now, you do more stuff on both machines locally. Then, you run syncoid again. This time, it’s an incremental replication, because you have @1 on both machines. First, syncoid takes a new snapshot, which again for simplicity’s sake we’ll just call @2.

Then, it replicates. Again, the actual command it uses looks something like this:

root@mB:~# ssh root@mA "zfs send -R -I poolA/dataset@1 poolA/dataset@2" | zfs receive poolB/dataset

Here’s the kicker, though: in order for mB:poolB/dataset to receive this incremental snapshot, it must wipe out any local changes made–including snapshots taken locally. @1 proceeds directly to @2, and that’s that.

So you lose anything you did locally on machineB:poolB/dataset when you replicate in from machineA:poolA/dataset. There is no way around this; it’s a necessary part of how replication works.
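Under the hood, that rollback is what zfs receive’s -F flag does: it rolls the target filesystem back to its most recent snapshot before the receive, discarding anything written on mB since then. A hand-written, single-dataset version of the same incremental would look roughly like this:

root@mB:~# ssh root@mA "zfs send -I poolA/dataset@1 poolA/dataset@2" | zfs receive -F poolB/dataset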

Does this answer your question? Or am I misinterpreting it?


Thanks! Yes, you understood correctly, and that is very helpful.

I have been testing some more and it works fine. The reason must have been that sanoid was still running on the target machine with recursion set. But I was also doing too many things at once, and I think I must have done stuff that syncoid didn’t like.


One of the more common newbie mistakes is to run sanoid on the replication target with the production template, rather than the backup or hotspare templates. You don’t want to take snapshots locally on a replication target–which is why you use the backup or hotspare templates for the target, so Sanoid knows it should be pruning stale snapshots but should not be taking new ones itself.
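As a rough sketch, assuming you’ve kept the backup template from the example sanoid.conf that ships with Sanoid, the target-side stanza for the dataset from earlier in this thread would look something like this:

root@remotehost:~# cat /etc/sanoid/sanoid.conf

[tank/backup/icy-box]
        use_template = backup
        recursive = yes

With autosnap turned off in that template, Sanoid on the target only prunes replicated snapshots as they age out and never takes new ones of its own.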

Remember, this isn’t as scary as it might sound–Sanoid is cautious, so if inbound replication stops working, Sanoid won’t delete older snapshots. Looking at dailies as an example, let’s say you’ve got 30 dailies on your backup target, then something breaks and you stop getting new dailies replicated in.

When Sanoid runs the next day, it’ll see that the oldest daily is 31 days old–but it won’t delete it, because there are only thirty total daily snapshots available, and in order for a stale snapshot to be pruned it must both be older than thirty days, and there must be thirty newer snapshots present. So even if you are a very very bad sysadmin and don’t notice for several months that your backups aren’t running, Sanoid won’t ever prune your orphaned backups until replication is fixed, and newer snapshots finally begin rolling in.

Continuing this example, let’s say your internet connection dropped at the remote location, and you didn’t notice for 90 days. When you fix it, you’ll still have 30 dailies on the remote–but they’ll be 60-90 days old, instead of 0-30 days. Once you have it fixed, let’s say a single daily snapshot replicates in before the connection goes down again. Sanoid will now see 31 dailies available, so it will prune the oldest of the stale dailies, but leave the other 29 as well as the new one.
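If you ever want to check those counts for yourself on the target, something like this (the dataset name is just the example from earlier in the thread) will count the daily autosnaps:

root@remotehost:~# zfs list -H -t snapshot -o name -r tank/backup/icy-box | grep -c '_daily$'

Sanoid’s own snapshots end in _frequently, _hourly, _daily, _monthly, or _yearly, so each bucket is easy to count the same way.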