I use sanoid on “nas1” and syncoid to replicate, irregularly, to “nas2”; typically this is once per week, when I power on nas2. I’m asking for some help understanding why I have hourly snapshots/bookmarks on nas1, along with what I believe to be an associated error during replication: “could not find any snapshots to destroy; check snapshot names”. This doesn’t affect all datasets, only those for which I do not have hourly snapshots configured.
From my sanoid.conf (these options have been unchanged for months):
[tank/archive]
weekly = 4
monthly = 6
yearly = 1
autosnap = yes
autoprune = yes
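My understanding is that hourlies could still be created for this dataset if a parent section or a template elsewhere in sanoid.conf applied to it recursively. Purely as a hypothetical illustration of what that would look like (the [tank] section and template name below are invented, not taken from my config):
[tank]
use_template = hypothetical
recursive = yes
[template_hypothetical]
hourly = 36
autosnap = yes
autoprune = yes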
Then, my syncoid CLI options:
/usr/sbin/syncoid --delete-target-snapshots --no-privilege-elevation \
--no-sync-snap --create-bookmark --sshkey /home/name/.ssh/syncoid tank/archive name@nas2:tank/archive
Despite these settings, I still have hourly snapshots for this dataset on nas1:
<snip>
tank/archive@autosnap_2024-05-25_00:00:09_hourly 0B - 3.07G -
tank/archive@autosnap_2024-05-25_01:00:02_hourly 0B - 3.07G -
tank/archive@autosnap_2024-05-25_02:00:03_hourly 0B - 3.07G -
tank/archive@autosnap_2024-05-25_03:00:05_hourly 0B - 3.07G -
tank/archive@autosnap_2024-05-25_04:00:02_hourly 0B - 3.07G -
tank/archive@autosnap_2024-05-25_05:00:02_hourly 0B - 3.07G -
tank/archive@autosnap_2024-05-25_06:00:01_hourly 0B - 3.07G -
tank/archive@autosnap_2024-05-25_07:00:06_hourly 0B - 3.07G -
tank/archive@autosnap_2024-05-25_08:00:03_hourly 0B - 3.07G -
tank/archive@autosnap_2024-05-25_09:00:00_hourly 0B - 3.07G -
tank/archive@autosnap_2024-05-25_10:00:03_hourly 0B - 3.07G -
tank/archive@autosnap_2024-05-25_11:00:03_hourly 0B - 3.07G -
tank/archive@autosnap_2024-05-25_12:00:02_hourly 0B - 3.07G -
tank/archive@autosnap_2024-05-25_13:00:02_hourly 0B - 3.07G -
tank/archive@autosnap_2024-05-25_14:00:04_hourly 0B - 3.07G -
tank/archive@autosnap_2024-05-25_15:00:03_hourly 0B - 3.07G -
tank/archive@autosnap_2024-05-25_16:00:05_hourly 0B - 3.07G -
<snip>
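To see what policy sanoid actually computes for this dataset, I believe I can do a simulated run on nas1 with something like the following (my assumption being that --readonly only simulates and doesn’t take or prune anything):
/usr/sbin/sanoid --cron --verbose --readonly
and confirm the snapshot names and creation times directly with:
zfs list -t snapshot -o name,creation -s creation -d 1 tank/archive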
During replication to nas2, I receive this error:
NEWEST SNAPSHOT: autosnap_2024-05-25_15:00:03_hourly
Sending incremental tank/archive@autosnap_2024-05-18_00:00:08_daily ... autosnap_2024-05-25_15:00:03_hourly (~ 34 KB):
35.0KiB 0:00:00 [ 117KiB/s] [==============================================================================================================================================] 102%
ssh -i /home/name/.ssh/syncoid -S /tmp/syncoid-name@nas2-1716650406-3134 name@nas2 ' zfs destroy '"'"'tank/archive'"'"'@autosnap_2024-05-18_10:00:04_hourly,autosnap_2024-05-18_11:00:07_hourly,autosnap_2024-05-18_12:00:00_hourly,autosnap_2024-05-18_13:00:02_hourly,autosnap_2024-05-18_14:00:03_hourly,autosnap_2024-05-18_15:00:01_hourly' failed: could not find any snapshots to destroy; check snapshot names.
On nas2, the snapshot list for the same period looks like this:
tank/archive@autosnap_2024-05-25_00:00:09_daily 0B - 3.08G -
tank/archive@autosnap_2024-05-25_00:00:09_hourly 0B - 3.08G -
tank/archive@autosnap_2024-05-25_01:00:02_hourly 0B - 3.08G -
tank/archive@autosnap_2024-05-25_02:00:03_hourly 0B - 3.08G -
tank/archive@autosnap_2024-05-25_03:00:05_hourly 0B - 3.08G -
tank/archive@autosnap_2024-05-25_04:00:02_hourly 0B - 3.08G -
tank/archive@autosnap_2024-05-25_05:00:02_hourly 0B - 3.08G -
tank/archive@autosnap_2024-05-25_06:00:01_hourly 0B - 3.08G -
tank/archive@autosnap_2024-05-25_07:00:06_hourly 0B - 3.08G -
tank/archive@autosnap_2024-05-25_08:00:03_hourly 0B - 3.08G -
tank/archive@autosnap_2024-05-25_09:00:00_hourly 0B - 3.08G -
tank/archive@autosnap_2024-05-25_10:00:03_hourly 0B - 3.08G -
tank/archive@autosnap_2024-05-25_11:00:03_hourly 0B - 3.08G -
tank/archive@autosnap_2024-05-25_12:00:02_hourly 0B - 3.08G -
tank/archive@autosnap_2024-05-25_13:00:02_hourly 0B - 3.08G -
tank/archive@autosnap_2024-05-25_14:00:04_hourly 0B - 3.08G -
tank/archive@autosnap_2024-05-25_15:00:03_hourly 0B - 3.08G -
As I say, with no hourlies configured in sanoid for this dataset, I’m failing to understand how I’m creating the conditions for this issue and the error to occur. Thank you in advance for any time you’re willing to spend helping me with this.