I'm tearing my hair out with this (which is unfortunate, as I don't have any). I'm trying to use syncoid to replicate my local snapshots to a remote server, but I periodically get the error "cannot receive new filesystem stream: destination has snapshots…"
Log output:
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784041]: NEWEST SNAPSHOT: autosnap_2025-09-03_00:00:02_hourly
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784041]: INFO: Sending oldest full snapshot tank/storage/e-Home_Movies@autosnap_2025-08-27_23:59:05_daily (~ 2.8 GB) to new target filesystem:
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784050]: Locale charset is ANSI_X3.4-1968 (ASCII)
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784050]: Assuming locale environment is lost and charset is UTF-8
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784050]: ATTENTION! Your session is being recorded!
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784082]: mbuffer: warning: HOME environment variable not set - unable to find defaults file
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784050]: Locale charset is ANSI_X3.4-1968 (ASCII)
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784050]: Assuming locale environment is lost and charset is UTF-8
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784050]: ATTENTION! Your session is being recorded!
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784050]: cannot receive new filesystem stream: destination has snapshots (eg. backup2/e-Home_Movies@autosnap_2025-09-01_22:48:05_weekly)
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784050]: must destroy them to overwrite it
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784050]: mbuffer: error: outputThread: error writing to <stdout> at offset 0x40000: Broken pipe
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784050]: mbuffer: warning: error during output to <stdout>: Broken pipe
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784082]: mbuffer: error: outputThread: error writing to <stdout> at offset 0x270000: Broken pipe
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784082]: mbuffer: warning: error during output to <stdout>: Broken pipe
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784081]: lzop: Broken pipe: <stdout>
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784079]: warning: cannot send 'tank/storage/e-Home_Movies@autosnap_2025-08-27_23:59:05_daily': signal received
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784041]: CRITICAL ERROR: zfs send -w 'tank/storage/e-Home_Movies'@'autosnap_2025-08-27_23:59:05_daily' | pv -p -t -e -r -b -s 2968558072 | lzop | mbuffer -q -s 128k -m 16M | ssh -S /tmp/syncoid-backup@100.78.10.18-1756860485-7519 backup@100.78.10.18 ' mbuffer -q -s 128k -m 16M | lzop -dfc | sudo zfs receive -s -F '"'"'backup2/e-Home_Movies'"'"'' failed: 256 at /usr/sbin/syncoid line 549.
My local sanoid.conf is:
[tank/storage/e-backup]
use_template = server
pre_snapshot_script = /root/bin/backup_files
[tank/storage/e-docker]
use_template = server
[tank/storage/e-immich]
use_template = server
[tank/storage/e-Library2]
use_template = server
[tank/storage/e-Home_Movies]
use_template = server
[tank/storage/e-photos]
use_template = server
[tank/storage/e-opencloud-data]
use_template = server
#############################
# templates below this line #
#############################
[template_server]
frequently = 0
hourly = 24
daily = 7
weekly = 4
monthly = 3
yearly = 0
autosnap = yes
autoprune = yes
INFO: Sending oldest full snapshot tank/storage/e-Home_Movies@autosnap_2025-08-27_23:59:05_daily (~ 2.8 GB) to new target filesystem:
...
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784050]: cannot receive new filesystem stream: destination has snapshots (eg. backup2/e-Home_Movies@autosnap_2025-09-01_22:48:05_weekly)
Your problem is that, for some reason, syncoid thinks the target doesn't exist yet (and is therefore sending a full snapshot rather than an incremental). Since the target does in fact exist, the zfs receive process fails when it tries to apply a full replication stream to an already-existing dataset.
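For context on why that distinction matters: syncoid decides between a full and an incremental send by looking for a snapshot that exists on both sides (it matches on snapshot GUIDs, not names). A rough sketch of that matching logic, using hypothetical snapshot lists rather than real `zfs list` output:

```shell
#!/bin/sh
# Hypothetical snapshot lists, oldest first. On a real system these would
# come from `zfs list -H -t snapshot -o name <dataset>` locally and over
# ssh; the names below are taken from the log above for illustration.
src_snaps="autosnap_2025-08-27_23:59:05_daily
autosnap_2025-09-01_22:48:05_weekly
autosnap_2025-09-03_00:00:02_hourly"
dst_snaps="autosnap_2025-08-27_23:59:05_daily
autosnap_2025-09-01_22:48:05_weekly"

# Walk the destination list and remember the last (newest) snapshot that
# also appears on the source.
common=""
for s in $dst_snaps; do
  case "$src_snaps" in
    *"$s"*) common="$s" ;;
  esac
done

if [ -n "$common" ]; then
  echo "incremental send from @$common"
else
  # No common snapshot found: fall back to a full send, which zfs receive
  # refuses if the target already has snapshots (the error in the log).
  echo "full send to new target filesystem"
fi
```

In the failing run, syncoid evidently came up empty on that matching step for remote C, fell back to a full send, and the receive then tripped over the weekly snapshot that already existed there.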
I’m not sure why this is going on; I’ve never seen this particular issue before and I’m a bit puzzled at what paths in the codebase might lead you to this outcome. I have some suspicions about some of this console chatter, though:
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784050]: Locale charset is ANSI_X3.4-1968 (ASCII)
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784050]: Assuming locale environment is lost and charset is UTF-8
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784050]: ATTENTION! Your session is being recorded!
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784082]: mbuffer: warning: HOME environment variable not set - unable to find defaults file
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784050]: Locale charset is ANSI_X3.4-1968 (ASCII)
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784050]: Assuming locale environment is lost and charset is UTF-8
Sep 03 10:48:07 ferryman.nine-hells.net syncoid[784050]: ATTENTION! Your session is being recorded!
It's not impossible that the locale information getting lost/mangled is somehow causing something subtle and weird when syncoid tries to parse the list of snapshots. I'm also curious about "ATTENTION! Your session is being recorded!". Do you have any idea what's producing that?
And finally, yet another question:
pre_snapshot_script = /root/bin/backup_files
Can we see the content of your pre-snapshot script, please?
$ syncoid --version
/usr/bin/syncoid version 2.2.0
(Getopt::Long::GetOptions version 2.58, Perl version 5.40.3)
Yes, all the datasets I am sending are encrypted, and they are stored on the remote without the key loaded.
Yes, it exists on the remote. If I destroy the named snapshot on the remote and then kick off syncoid again, the incremental snapshots get synced as expected.
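For anyone hitting the same wall, the manual workaround described above looks roughly like this. These commands are illustrative only (host, user, and dataset names are taken from the log; the syncoid flags you use may differ), and you should only run the destroy after confirming you don't need the conflicting snapshot:

```shell
# Destroy the conflicting snapshot on the target...
ssh backup@100.78.10.18 sudo zfs destroy \
    backup2/e-Home_Movies@autosnap_2025-09-01_22:48:05_weekly

# ...then re-run the replication; the incremental send now succeeds.
# (--sendoptions=w gives the raw `zfs send -w` seen in the log.)
syncoid --sendoptions=w tank/storage/e-Home_Movies \
    backup@100.78.10.18:backup2/e-Home_Movies
```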
A detail that may or may not be important: I am syncing (in push fashion) from my local machine (A) to two different remotes, sequentially. At present, syncing to remote B appears to be working, but I am having problems with remote C. It worked as expected for the past two days but failed again today, and only for some datasets (2 out of 7 failed).
A → B (currently working okay)
A → C (worked for 2 days and failed today)
That needs to fail. You need to work out why your destination has a snapshot that is newer than anything your source has. At 2025-09-01 22:48:05, some process must have created a snapshot on your destination.
As has been pointed out, that log indicates the source snapshot is the "oldest" full snapshot, not the newest available.
The question is why syncoid is attempting to send a full snapshot (not an incremental) to a "new target filesystem" when the destination dataset already exists and has previous snapshots synced by syncoid.
The logfile says the destination pool is backup2, but your command calls it backup.
The option --force-delete should not complain about existing snapshots on the destination:
--force-delete: Remove target datasets recursively, if there are no matching snapshots/bookmarks (also overwrites conflicting named snapshots)
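For reference, if one did want syncoid to clear such conflicts itself, the invocation would look something like this (use with care, since it can destroy data on the target; the dataset and host names are just the ones from this thread):

```shell
syncoid --force-delete tank/storage/e-Home_Movies \
    backup@100.78.10.18:backup2/e-Home_Movies
```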
This is weird. Instead of complaining about an existing snapshot it should just delete it.
Okay, an update here. I have disabled tlog on my backup servers and am no longer having any issues. Fingers crossed that it's solved for good. I'm unsure why or how tlog was interfering with syncoid / zfs send/receive, but that appears to have been the case.
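For anyone curious how tlog ends up in the data path: tlog records sessions by substituting `tlog-rec-session` as the user's login shell (the "ATTENTION! Your session is being recorded!" banner in the log is its default notice), so it sits between sshd and the remote `zfs receive` pipeline and can inject output into the stream syncoid parses. A quick check on the backup server, assuming a typical tlog setup:

```shell
# Is the backup user's shell wrapped by tlog?
getent passwd backup | cut -d: -f7
# A tlog-wrapped account typically shows /usr/bin/tlog-rec-session here;
# restoring a plain shell (e.g. `usermod -s /bin/bash backup`) takes tlog
# out of the replication path.
```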