Syncoid - clean up backup-server

Hi. I have these machines and am trying to use sanoid and syncoid.

  • prod1 - production server 1 - running sanoid
  • hs1 - hot spare 1 - running sanoid
  • pc12 - a backup-server - running syncoid

This setup works, but the backup server is filling up with too many old snapshots. How would you do automatic cleanup of the backup server?

These are the replication commands on the backup server.

root@pc12:~# crontab -l
# m h  dom mon dow   command
30 23 * * * /usr/sbin/syncoid -r --source-bwlimit=40M root@hs1:srv zfs/hs1 --compress=gzip
08 22 * * * /usr/sbin/syncoid -r --source-bwlimit=40M root@prod1:srv zfs/prod1 --compress=gzip

These are the datasets on the backup server.

root@pc12:~# zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
zfs                              4.01T  3.13T   112K  /zfs
zfs/hs1                          2.41T  3.13T   140K  /zfs/hs1
zfs/hs1/prod1                     562G  3.13T  9.29G  /zfs/hs1/prod1
zfs/hs1/prod1/subvol-103-disk-0  14.0G  3.13T  8.92G  /zfs/hs1/prod1/subvol-103-disk-0
zfs/hs1/prod1/subvol-107-disk-0   545M  3.13T   545M  /zfs/hs1/prod1/subvol-107-disk-0
zfs/hs1/prod1/subvol-109-disk-0  57.0G  3.13T  14.7G  /zfs/hs1/prod1/subvol-109-disk-0
zfs/hs1/prod1/subvol-110-disk-0  32.3G  3.13T  2.20G  /zfs/hs1/prod1/subvol-110-disk-0
zfs/hs1/prod1/subvol-112-disk-0  8.64G  3.13T  1.75G  /zfs/hs1/prod1/subvol-112-disk-0
zfs/hs1/prod1/subvol-114-disk-0  1.72G  3.13T  1.72G  /zfs/hs1/prod1/subvol-114-disk-0
zfs/hs1/prod1/subvol-114-disk-1  1.72G  3.13T  1.72G  /zfs/hs1/prod1/subvol-114-disk-1
zfs/hs1/prod1/vm-106-disk-0       436G  3.13T   232G  -
zfs/hs1/subvol-101-disk-0        4.50G  3.13T  4.19G  /zfs/hs1/subvol-101-disk-0
zfs/hs1/subvol-102-disk-0        3.79G  3.13T  3.06G  /zfs/hs1/subvol-102-disk-0
zfs/hs1/subvol-103-disk-0        17.8G  3.13T  10.2G  /zfs/hs1/subvol-103-disk-0
zfs/hs1/subvol-105-disk-0        19.4G  3.13T  18.9G  /zfs/hs1/subvol-105-disk-0
zfs/hs1/subvol-107-disk-1        11.2G  3.13T  2.43G  /zfs/hs1/subvol-107-disk-1
zfs/hs1/subvol-108-disk-0         540M  3.13T   540M  /zfs/hs1/subvol-108-disk-0
zfs/hs1/subvol-109-disk-0        65.8G  3.13T  32.5G  /zfs/hs1/subvol-109-disk-0
zfs/hs1/subvol-110-disk-0        34.0G  3.13T  2.81G  /zfs/hs1/subvol-110-disk-0
zfs/hs1/subvol-112-disk-1        27.0G  3.13T  3.37G  /zfs/hs1/subvol-112-disk-1
zfs/hs1/subvol-115-disk-0        4.71G  3.13T  4.71G  /zfs/hs1/subvol-115-disk-0
zfs/hs1/vm-100-disk-0             592K  3.13T   592K  -
zfs/hs1/vm-100-disk-1             268G  3.13T  40.9G  -
zfs/hs1/vm-100-disk-2             752K  3.13T    64K  -
zfs/hs1/vm-101-disk-0            18.1G  3.13T  15.3G  -
zfs/hs1/vm-106-disk-0             948G  3.13T   236G  -
zfs/hs1/vm-106-disk-0_            481G  3.13T   184G  -
zfs/prod1                         838G  3.13T   152K  /zfs/prod1
zfs/prod1/subvol-101-disk-0      4.50G  3.13T  4.19G  /zfs/prod1/subvol-101-disk-0
zfs/prod1/subvol-102-disk-0      3.79G  3.13T  3.06G  /zfs/prod1/subvol-102-disk-0
zfs/prod1/subvol-103-disk-0      17.4G  3.13T  10.2G  /zfs/prod1/subvol-103-disk-0
zfs/prod1/subvol-104-disk-0      1.09G  3.13T   747M  /zfs/prod1/subvol-104-disk-0
zfs/prod1/subvol-105-disk-0      19.4G  3.13T  18.9G  /zfs/prod1/subvol-105-disk-0
zfs/prod1/subvol-107-disk-1      9.98G  3.13T  2.43G  /zfs/prod1/subvol-107-disk-1
zfs/prod1/subvol-108-disk-0       130G  3.13T  97.7G  /zfs/prod1/subvol-108-disk-0
zfs/prod1/subvol-109-disk-0      34.1G  3.13T  10.7G  /zfs/prod1/subvol-109-disk-0
zfs/prod1/subvol-112-disk-1      26.1G  3.13T  3.41G  /zfs/prod1/subvol-112-disk-1
zfs/prod1/subvol-115-disk-0      4.74G  3.13T  3.23G  /zfs/prod1/subvol-115-disk-0
zfs/prod1/vm-100-disk-0           592K  3.13T   592K  -
zfs/prod1/vm-100-disk-1           278G  3.13T  40.9G  -
zfs/prod1/vm-100-disk-2           752K  3.13T    64K  -
zfs/prod1/vm-106-disk-0           309G  3.13T   292G  -

Here are the last 10 of a gazillion snapshots.

root@pc12:~# zfs list -t snapshot | tail
zfs/prod1/vm-106-disk-0@autosnap_2025-09-30_15:00:01_hourly                6.96M      -   292G  -
zfs/prod1/vm-106-disk-0@autosnap_2025-09-30_16:00:01_hourly                13.0M      -   292G  -
zfs/prod1/vm-106-disk-0@autosnap_2025-09-30_17:00:01_hourly                17.6M      -   292G  -
zfs/prod1/vm-106-disk-0@autosnap_2025-09-30_18:00:01_hourly                13.1M      -   292G  -
zfs/prod1/vm-106-disk-0@autosnap_2025-09-30_19:00:02_hourly                10.7M      -   292G  -
zfs/prod1/vm-106-disk-0@autosnap_2025-09-30_20:00:01_hourly                13.0M      -   292G  -
zfs/prod1/vm-106-disk-0@autosnap_2025-09-30_21:00:02_hourly                11.8M      -   292G  -
zfs/prod1/vm-106-disk-0@__replicate_106-1_1759268104__                     9.87M      -   292G  -
zfs/prod1/vm-106-disk-0@autosnap_2025-09-30_22:00:02_hourly                10.5M      -   292G  -
zfs/prod1/vm-106-disk-0@syncoid_pc12_2025-09-30:22:25:37-GMT00:00             0B      -   292G  -
root@pc12:~#

You would run sanoid on your backup server and turn off autosnap; there’s an example in the sample config:

[...]
[template_backup]
    autoprune = yes
    frequently = 0
    hourly = 30
    daily = 90
    monthly = 12
    yearly = 0

    ### don't take new snapshots - snapshots on backup
    ### datasets are replicated in from source, not
    ### generated locally
    autosnap = no

    ### monitor hourlies and dailies, but don't warn or
    ### crit until they're over 48h old, since replication
    ### is typically daily only
    hourly_warn = 2880
    hourly_crit = 3600
    daily_warn = 48
    daily_crit = 60
[...]

Tune the number of snapshots to keep according to your needs, then apply the template to your dataset, adjusted to your environment:

[your/backups]
    use_template = backup
    recursive = yes
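
Note that sanoid itself has to run periodically on the backup server for autoprune to take effect. The Debian/Ubuntu package ships a systemd timer for this; if you installed from source, a cron entry along these lines works (path assumed, adjust to your install):

```
# run sanoid regularly; with autosnap = no it will only prune
*/15 * * * * /usr/sbin/sanoid --cron
```

You can also test the config safely first with `sanoid --prune-snapshots --readonly --verbose`, which simulates pruning without deleting anything.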

Also, for a one-off manual cleanup, use this (it lists everything except the newest 7 snapshots):

zfs list -t snapshot -o name -S creation | grep '^zfs/prod1/vm-106-disk-0@auto' | tail -n +8 | xargs -n 1 echo

[for safety, the above command only lists the snapshots it would delete]

To delete after checking the output carefully, replace echo with zfs destroy -v
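
One detail worth double-checking: `tail -n +N` starts printing at line N (1-based), so with the newest-first sort from `-S creation` you need `+8` to spare exactly the 7 most recent snapshots. A quick sketch with dummy names shows the behavior:

```shell
# snap1..snap10 stand in for snapshots sorted newest-first;
# tail -n +8 skips the first 7 lines and prints the rest
printf 'snap%d\n' 1 2 3 4 5 6 7 8 9 10 | tail -n +8
# -> snap8
#    snap9
#    snap10
```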

Yes! Thanks @doddi

The backup server has a lot of pruning going on now. This is the current config.

[zfs/prod1]
        use_template = backup
        recursive = yes
[zfs/hs1]
        use_template = backup
        recursive = yes

##############################
## templates below this line ##
##############################
#
[template_backup]
        frequently = 0
        hourly = 48
        daily = 30
        monthly = 6
        yearly = 1
        autosnap = no
        autoprune = yes

You have a few lines for monitoring there as well. I will read up on that before implementing.


You’re welcome!

Sanoid includes some monitoring commands to use with Nagios, namely

--monitor-health
--monitor-capacity
--monitor-snapshots

These are meant to be run by Nagios/Icinga monitoring systems; there’s more detail on GitHub. If you’re not using such a system, you don’t have to configure the monitoring in sanoid.conf.
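
As a rough sketch, wiring these into NRPE might look like the following (the check names and sanoid path are assumptions; adjust to your install):

```
# nrpe.cfg check definitions; each command exits 0/1/2 per Nagios convention
command[check_zfs_health]=/usr/sbin/sanoid --monitor-health
command[check_zfs_capacity]=/usr/sbin/sanoid --monitor-capacity
command[check_zfs_snapshots]=/usr/sbin/sanoid --monitor-snapshots
```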
