Hi there,
I am currently replicating a dataset from one host (storage-vm) to another host (live-vm) with syncoid on an hourly timer. The command that my systemd service uses is /usr/sbin/syncoid --recursive tank/dataset root@live-vm:tank/target and the snapshots it’s transferring come from sanoid, which snapshots the dataset hourly.
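For context, the service/timer pair driving this looks roughly like the sketch below. The unit names are illustrative, not my exact files; only the ExecStart line is taken verbatim from my setup.

```
# /etc/systemd/system/syncoid-dataset.service  (name is illustrative)
[Unit]
Description=Replicate tank/dataset to live-vm

[Service]
Type=oneshot
ExecStart=/usr/sbin/syncoid --recursive tank/dataset root@live-vm:tank/target

# /etc/systemd/system/syncoid-dataset.timer
[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```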
The retention policy on storage-vm is to keep 24 hourlies, 7 dailies, 3 monthlies. The retention policy on live-vm is only 24 hourlies. All of this works fine. Only _hourly snapshots make it to live-vm and they are pruned after 24h. So far, so good.
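The retention policies above correspond to sanoid.conf sections roughly like the following (template names are illustrative; the numbers match what I described):

```
# storage-vm: /etc/sanoid/sanoid.conf
[tank/dataset]
use_template = production
recursive = yes

[template_production]
hourly = 24
daily = 7
monthly = 3
autosnap = yes
autoprune = yes

# live-vm: /etc/sanoid/sanoid.conf
[tank/target]
hourly = 24
daily = 0
monthly = 0
autosnap = no      # snapshots arrive via replication, none taken locally
autoprune = yes
```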
Now, what I want is to NOT immediately propagate the state of the LIVE dataset between the two. Explicitly: when I delete files and folders inside “storage-vm:tank/dataset” with rm, those files and folders should still be present in the live dataset “live-vm:tank/target” until the last snapshot that contained them is pruned there, i.e. 24h after I deleted them on storage-vm.
Is something like this even possible with ZFS/syncoid? ChatGPT gaslit me by claiming that this is the standard behaviour of syncoid, that it “never transfers the state of the live dataset, only snapshots”, and that my “files will only stop showing up in the live dataset on live-vm when the last snapshot that contains them is pruned there”. Neither claim, of course, is true.
After I told it that my files and folders got deleted on live-vm exactly 1h after I deleted them on storage-vm, i.e. after the very next snapshot replication rather than 24h later, it advised using the option “--no-sync-snap”, but after reading the description of “--no-sync-snap” I fail to see how that would help.
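For reference, the invocation it suggested would presumably just be my existing command with that flag added. As far as I understand, “--no-sync-snap” only stops syncoid from creating its own ephemeral snapshot before sending and makes it replicate up to the newest existing (sanoid) snapshot instead; either way the received stream still rolls the target forward to a snapshot taken after my deletion:

```
/usr/sbin/syncoid --recursive --no-sync-snap tank/dataset root@live-vm:tank/target
```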
Is there actually an option with ZFS/syncoid to delay deletion of files & folders between two synced hosts by whatever time period you set in the retention policies?
Thanks in advance!