Ahhh you are right. I had read previous threads in this forum such as this one: Syncoid, multiple hosts and --no-sync-snap
I had assumed since I was managing all my syncing from one host that I was avoiding such an issue.
It turns out you are right as usual.
On Home (cleteServer), I have hundreds of “tank/backup/zfs/local/rpool/images@syncoid_rpool-to-sc_cleteServer_xx” snapshots. On Remote (what I call SC in the identifier), I have only a single “images@syncoid_rpool-to-sc_cleteServer_xx” snapshot.
I'm also getting confused by the `--identifier`s that I used. I think I may destroy every backup and just make an ID like `<source location>-<source pool name or dataset>-to-<destination location>-<destination pool name>`, which would end up with things like `ga-rpool-to-ga-tank` and `ga-rpool-to-sc-rpool` or `ga-tank-photos-to-sc-tank`. Right now, seeing all this `photos-to-sc` and `rpool-to-tank` stuff is confusing me.
What if I do this:

- Fix identifiers to be clearer
- Destroy everything (I have cloud backups of all important data in addition to this cross-backup scheme)
- Use `--no-sync-snap` for local-to-local backups
- Continue to use sync snaps for remote backups
Based on what you described, I think that would fix my issues.
One doubt remains: what if Sanoid prunes a local snap that was used as the base for a local-to-local replication (say syncoid had chosen a `_frequently` snap)? Then the next run would complain that there's no common snapshot. I may just have to, as you said, write some script that removes all the "foreign" snaps.
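As a starting point, that cleanup could be sketched like this. The dataset path and the `@syncoid_` name pattern are assumptions based on my snapshot names above, and it defaults to a dry run; it keeps the newest matching snapshot so the current replication base survives:

```shell
#!/bin/sh
# Sketch: prune old syncoid sync snapshots on the destination dataset,
# keeping only the newest one (the likely replication base).
DATASET="tank/backup/zfs/local/rpool/images"  # assumed destination dataset
PATTERN="@syncoid_"                           # syncoid's sync-snap prefix
DRY_RUN=1                                     # set to 0 to actually destroy

# stdin: snapshot names sorted oldest-first; prints every snapshot whose
# name matches $PATTERN except the last (newest) one
prune_foreign() {
    grep "$PATTERN" | sed '$d'
}

# list snapshots of the dataset, oldest first
list_snaps() {
    zfs list -H -t snapshot -o name -s creation -r "$DATASET"
}

main() {
    list_snaps | prune_foreign | while read -r snap; do
        if [ "$DRY_RUN" -eq 1 ]; then
            echo "would destroy: $snap"
        else
            zfs destroy "$snap"
        fi
    done
}
# main   # uncomment after reviewing the dry-run output
```

The filtering is plain text processing, so it can be sanity-checked against a fake snapshot list before pointing it at real pools.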