I have two Proxmox PVE servers. Each has an NVMe drive for Proxmox and the VMs/LXCs (which back up to a PBS server), and a 16TB USB HDD for data, LUKS-encrypted with ZFS on top. The zpool on the machine that will be at my house (server1) is named z16TB-DM, and the one that will be at my Dad’s house (server2) is named z16TB-AM.
I have datasets for three users (del, user2, user3), plus media, software and NVR datasets. I’ve taken a snapshot of each dataset and used zfs send/receive to copy it to the other drive, so the contents are identical at present.
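For concreteness, a sketch of the initial seeding for the del dataset (the @copy snapshot is the one that appears in the error output below; the same was done for each dataset):

zfs snapshot z16TB-DM/del@copy
zfs send z16TB-DM/del@copy | ssh root@10.10.55.198 zfs receive z16TB-AM/del
# (or piped to a locally attached drive instead of over ssh)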
Once the servers are deployed, I’ll be backing up my PC’s data to my dataset (del), and my Dad and Mum will do likewise with theirs (user2 and user3).
What I want to do is have my dataset (del) mirror to my Dad’s server every night, and have the user2 and user3 datasets mirror from his server to mine, so we have an offsite copy of each other’s backed up PC data.
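Assuming syncoid (which I’m trying to set up below) ends up being the tool, I imagine the nightly jobs would look something like this; a sketch only, and the schedule and the assumption that the same ssh port works both ways are mine:

# /etc/cron.d/offsite-sync on server1: push del to server2 at 02:00
0 2 * * * syncoid /usr/sbin/syncoid --sshport=2325 --sshkey=/home/syncoid/.ssh/id_rsa z16TB-DM/del syncoid@10.10.55.198:z16TB-AM/del

# /etc/cron.d/offsite-sync on server2: push user2 and user3 to server1 at 02:00
0 2 * * * syncoid /usr/sbin/syncoid --sshport=2325 --sshkey=/home/syncoid/.ssh/id_rsa z16TB-AM/user2 syncoid@10.10.18.198:z16TB-DM/user2
0 2 * * * syncoid /usr/sbin/syncoid --sshport=2325 --sshkey=/home/syncoid/.ssh/id_rsa z16TB-AM/user3 syncoid@10.10.18.198:z16TB-DM/user3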
For the media and software datasets it’s a bit more complicated, because new files are likely to be added on either server, so I guess those need an “update with new files only” sync in both directions; otherwise, if a file were added to server1 and server2 initiated a mirror job first, the file would be deleted from server1.
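As far as I know ZFS replication can’t merge changes from both sides, so those two datasets would presumably need a file-level sync instead. A sketch of the “new files only” behaviour with rsync, assuming the datasets are mounted at the default /z16TB-DM/media and /z16TB-AM/media:

# on server1: add anything new to server2, never delete or overwrite anything there
rsync -av --ignore-existing -e "ssh -p 2325" /z16TB-DM/media/ syncoid@10.10.55.198:/z16TB-AM/media/
# plus the mirror-image run on server2 pointing back at server1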
The servers will be connected via a secure Tailscale connection, so I won’t need to use any extra encryption in transit.
I’ve created a user “syncoid” and added it to the sudo group; added "syncoid ALL=NOPASSWD: /usr/sbin/zfs" to /etc/sudoers; generated a key pair with ssh-keygen, copied it to both servers under /home/syncoid/.ssh/ and added the public key to authorized_keys; and tested that I can connect either way over ssh (using the LAN addresses 10.10.18.198 and 10.10.55.198).
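Concretely, the setup on each server was roughly this (a sketch of the steps above):

useradd -m -s /bin/bash syncoid
usermod -aG sudo syncoid
# line added to /etc/sudoers:
#   syncoid ALL=NOPASSWD: /usr/sbin/zfs
su - syncoid -c 'ssh-keygen -t rsa'
# then each server's id_rsa.pub appended to the other's /home/syncoid/.ssh/authorized_keys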
I’ve configured sanoid on my server and that’s creating hourly, daily and monthly snapshots automatically.
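For reference, /etc/sanoid/sanoid.conf is along these lines (a sketch; the retention counts here are illustrative rather than my exact values):

[z16TB-DM/del]
        use_template = backup
        recursive = yes

[template_backup]
        hourly = 24
        daily = 30
        monthly = 6
        autosnap = yes
        autoprune = yes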
syncoid isn’t working at the moment, though. One of the errors it gave was about sudo not being found, which didn’t surprise me as sudo isn’t installed in PVE by default, so I installed it, but it still gives the same error:
syncoid z16TB-DM/del syncoid@10.10.55.198:z16TB-AM/del --sshport=2325 --sshkey=/home/syncoid/.ssh/id_rsa
WARN: ZFS resume feature not available on target machine - sync will continue without resume support.
INFO: Sending oldest full snapshot z16TB-DM/del@copy (~ 624.0 GB) to new target filesystem:
bash: line 1: sudo: command not found
mbuffer: error: outputThread: error writing to <stdout> at offset 0x110000: Broken pipe
mbuffer: warning: error during output to <stdout>: Broken pipe
2.27MiB 0:00:00 [3.09MiB/s] [> ] 0%
CRITICAL ERROR: zfs send 'z16TB-DM/del'@'copy' | pv -p -t -e -r -b -s 670047542304 | lzop | mbuffer -q -s 128k -m 16M 2>/dev/null | ssh -p 2325 -i /home/syncoid/.ssh/id_rsa -S /tmp/syncoid-syncoid@10.10.55.198-1739064523 syncoid@10.10.55.198 ' mbuffer -q -s 128k -m 16M 2>/dev/null | lzop -dfc | sudo zfs receive -F '"'"'z16TB-AM/del'"'"'' failed: 32512 at /usr/sbin/syncoid line 492.
I’m not even sure syncoid is the best tool for what I need to do, and as I’m using Tailscale, ssh is probably an unnecessary overhead; but if the tools I need rely on it, it’s probably not worth worrying about.
I’m also not sure what I need to do about permissions for the datasets/folders. At the moment each user’s dataset is owned by its user (del:del, etc.) and the media folder is owned by media:media, but maybe for syncoid and the like to work I need to create a “syncoid” group, recursively change the group on each dataset to it, and give the group rw permissions?
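i.e. something like this, which is just a sketch of the idea (assuming the default mountpoints under /z16TB-DM) rather than something I’ve run:

# the syncoid group should already exist from useradd; otherwise: groupadd syncoid
chgrp -R syncoid /z16TB-DM/del /z16TB-DM/media
chmod -R g+rw /z16TB-DM/del /z16TB-DM/media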