Hi,
I have a setup with a zpool that gets snapshotted by sanoid regularly, and once a day (at 2 AM) syncoid replicates it over to a zpool on my backup server.
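For reference, the replication runs on the backup server (it pulls from the source) via a cron job that looks roughly like this (the hostname and install path here are placeholders, not my exact entry):

0 2 * * * root /usr/sbin/syncoid -r root@source:thepool/data backup01/data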
But I recently discovered that a few datasets weren't replicating anymore, failing with the critical error "target exists, but has no snapshots matching".
I realized why: about a month ago I deleted some snapshots on the source to free up space, without thinking much about it… I guess I managed to destroy the last common syncoid snapshot in the process.
So on my backup server, the newest snapshots are:
backup01/data@autosnap_2023-08-05_22:00:05_hourly 0B - 765G -
backup01/data@autosnap_2023-08-05_23:00:02_hourly 0B - 765G -
backup01/data@autosnap_2023-08-06_00:00:08_daily 0B - 765G -
backup01/data@autosnap_2023-08-06_00:00:08_hourly 0B - 765G -
backup01/data@autosnap_2023-08-06_01:00:02_hourly 0B - 765G -
backup01/data@autosnap_2023-08-06_02:00:01_hourly 0B - 765G -
backup01/data@syncoid_backup01_2023-08-06:02:02:57-GMT02:00 0B - 765G -
And on the source zpool, the oldest snapshots are:
thepool/data@syncoid_backup01_2023-08-07:02:08:49-GMT02:00 116K - 776G -
thepool/data@syncoid_backup01_2023-08-08:02:02:41-GMT02:00 84K - 776G -
thepool/data@syncoid_backup01_2023-08-09:02:04:16-GMT02:00 76K - 776G -
thepool/data@syncoid_backup01_2023-08-10:02:13:25-GMT02:00 104K - 794G -
thepool/data@syncoid_backup01_2023-08-11:02:04:40-GMT02:00 124K - 808G -
thepool/data@syncoid_backup01_2023-08-12:02:03:34-GMT02:00 64K - 845G -
thepool/data@syncoid_backup01_2023-08-13:02:02:45-GMT02:00 64K - 845G -
I’ve used ZFS for quite a while, but still haven’t fully wrapped my head around the details of how incremental replication works, so my question is: can I somehow get syncoid to start replicating this dataset again?
From what I’ve read, I’m quite sure the answer is “no”, but I wanted to ask the experts to be sure.
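My (possibly flawed) understanding is that an incremental send needs a base snapshot that still exists on both ends, i.e. under the hood it boils down to something along the lines of (snapshot names here are just placeholders):

zfs send -I thepool/data@common-snap thepool/data@newest-snap | ssh backup01 zfs receive backup01/data

and since I destroyed whatever that common snapshot was on the source, there's nothing left for the increment to be based on.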
If need be, I can delete all snapshots on the source; there's nothing important there. But I do want to keep the snapshots I currently have on the backup server, at least for a while.
What are my options here?
I guess the "worst case" solution is something like: rename backup01/data to backup01/data-before-i-messed-up and run a fresh replication from the source into a new backup01/data, roughly as sketched below. It would waste some disk space, but I can live with that.
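Concretely, on the backup server that would be something like (placeholders again, not my exact commands):

zfs rename backup01/data backup01/data-before-i-messed-up
syncoid -r root@source:thepool/data backup01/data    # full send into a fresh backup01/data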
Thanks for any help/advice!