I’ve been lurking here for a while and haven’t quite been able to figure out what I’m doing. As a ZFS neophyte, I’m just trying to wrap my head around using sanoid/syncoid, but I think I’ve been drawing on too many disparate sources and recommendations and find myself more confused than when I started.
I have several servers for both personal and business use, and after listening to 2.5 Admins for several years, I finally made the leap to ZFS when rebuilding my primary server last year. However, I never got around to really taking advantage of all its benefits. Last night I think I finally got sanoid running automatic snapshots on my test server.
The part I’ve confused myself on is getting syncoid set up to push (or pull?) those snapshots to/from my offsite backup.
I’ve found numerous blog posts of people doing it, as well as numerous partially related topics here, but I feel like I’ve walked away more uncertain about the process than anything. I’m comfortable with the Linux CLI but know very little about ZFS.
Can anyone recommend a good guide for getting syncoid setup?
The basics are as follows, assuming you want to run syncoid on the target pulling from the source (pull backup) as opposed to running it on the source pushing to the target (push backup):
Generate an SSH key on the target. For now, just generate one for root; later, when you’re feeling fancy, you can set up ZFS delegation and use an unprivileged user account, but right now it sounds like you need to see a W as soon as possible. (There’s a consolidated command sketch after these steps.)
Copy the pubkey from the target to the source, and place it in the source’s /root/.ssh/authorized_keys file. If you’re utterly unfamiliar with this, it can be as simple as literally putting the pubkey in /root/.ssh and naming it authorized_keys. Later, if you need more than one key to work, you just add the extra keys to the same file after the first one, each on its own line. There is no additional syntax to worry about.
Now, make sure your SSH key sharing worked: you should be able to ssh root@source from your target machine and immediately get a prompt. If not, stop here; you need to get this part right before moving on.
Once you’ve got SSH working using keys, you’re ready to roll: syncoid -r root@source:sourcepool/dsname targetpool/dsname will replicate the dataset sourcepool/dsname and all its children from the source machine to targetpool/dsname (and so forth) on the target system.
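To make that concrete, here’s roughly the whole flow run from the target, assuming root on both ends and placeholder host/dataset names (adjust to your own):

ssh-keygen -t ed25519                                 # on the target, as root; accept the default file location
ssh-copy-id root@source                               # appends your pubkey to /root/.ssh/authorized_keys on the source
ssh root@source zfs list                              # confirm key-based login works before going further
syncoid -r root@source:sourcepool/dsname targetpool/dsname   # initial pull replication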
Essentially, syncoid makes ZFS replication look and feel like a much faster version of rsync. Note that in addition to the datasets themselves, you get all snapshots which were taken on the source present on the target as well.
ALSO note: you should be running Sanoid on the target as well as the source, but on the target, the config stanza for replicated-in datasets should use the backup or hotspare templates, not the production template. This is because you should not take local snapshots of datasets you routinely replicate in from a different source. At best, incoming replication wipes out all the snapshots you took locally (it has to; that’s part of how replication works), and at worst you wind up with same-name but different-GUID snapshots that don’t match, can’t be used as replication targets, and leave you trying to figure out how to unsnarl a confusing mess.
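For the sanoid side of that, the target’s /etc/sanoid/sanoid.conf might look something like this sketch (the dataset name is an example, and the retention numbers are only illustrative):

[targetpool/dsname]
        use_template = backup
        recursive = yes

[template_backup]
        autoprune = yes
        autosnap = no
        hourly = 30
        daily = 90
        monthly = 12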
If you have further questions, ask: but try to get through the SSH key sharing bits and confirm that you can SSH from target to source successfully first, please.
Yes, I did that in combination with using the GitHub docs. But I think I found some poor-quality results, and when I attempted to use those to better understand the docs, I screwed things up and confused myself.
Apologies for the delay, I was away for the weekend.
Yes, you are correct. I want to run syncoid on the target and pull from the source, and yes, ideally I would like to get it backed up sooner rather than later. While I already have SSH keys set up on both boxes, I’m also running Tailscale on both boxes, with Tailscale SSH enabled and configured. Given that my offsite backup is located at a friend’s house, and he runs Windows and has minimal security skills, I wanted to use Tailscale and keep things as locked up as possible. I may eventually use encrypted backups, but for now I wanted to understand how to set things up and use them correctly before tackling that.
So, is this easier to run as root vs. running as a user with sudo privileges? Or is it easiest to run as root until I set up ZFS delegation? At the moment, I have root login disabled on both boxes. I can just re-enable it in my Tailscale ACL, but wanted to make sure first.
Seems like it shouldn’t be too difficult. As I understand what you both have shared, in the simplest terms I need to do the following:
Set up sanoid and syncoid on the target box with a backup template
Set up a cron job to run syncoid on the target box
Set up ZFS delegation
I can confirm that I am able to SSH from target to source both as user and root (I re-enabled it in the aforementioned ACL).
My recommendation? Get this set up and working in some test VMs first, before deploying it with critical data.
It’s incredibly simple to make a pool of sparse files, and you can use that to test doing send/receive with syncoid. In fact, that’s exactly how I tested things.
for i in {0..3}; do truncate -s 5T /tmp/$i.raw; done   # four 5T sparse files to act as fake disks
zpool create -o ashift=12 testpool raidz1 /tmp/*.raw   # throwaway raidz1 pool built from them
From there I usually copy in some junk data; cp-ing some log files from /var/log is usually my go-to.
There, now you have a pool to play with on both sides, without needing actual hardware.
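If you want to rehearse the full syncoid round trip without a second machine, you can make a second throwaway pool and replicate into it locally, then tear it all down afterwards. A sketch, with made-up dataset and file names:

zfs create testpool/junk                              # a child dataset to fill with junk data
cp /var/log/*.log /testpool/junk/                     # assumes the default mountpoint
truncate -s 5T /tmp/backup.raw
zpool create backuppool /tmp/backup.raw
syncoid -r testpool/junk backuppool/junk              # local replication, no SSH involved
zpool destroy backuppool                              # teardown when you're done
zpool destroy testpool
rm /tmp/*.raw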
Hmmmm, something didn’t go correctly here. I probably should have done what @bladewdr said in the previous reply.
I assumed (and should have asked) that this was supposed to be run from the target machine. In other words, the remote backup box. Is that correct?
When I first ran syncoid -r root@source:sourcepool/dsname targetpool/dsname (and yes, I ran it with my actual source, pool, and dataset names rather than the example given), I got the following:
CRITICAL ERROR: Target backups/server exists but has no snapshots matching with my-pool/vm!
Replication to target would require destroying existing
target. Cowardly refusing to destroy your existing target.
NOTE: Target backups/server dataset is < 64MB used - did you mistakenly run
`zfs create backups/server` on the target? ZFS initial
replication must be to a NON EXISTENT DATASET, which will
then be CREATED BY the initial replication process.
So, I deleted the pool I had created and ran syncoid again.
This time, I got the following error:
WARN: ZFS resume feature not available on target machine - sync will continue without resume support.
INFO: Sending oldest full snapshot my-pool/vm@autosnap_2024-06-27_12:57:01_monthly (~ 265.9 GB) to new target filesystem:
cannot open 'backups': dataset does not exist
cannot receive new filesystem stream: unable to restore to destination
64.0KiB 0:00:00 [ 189KiB/s] [> ] 0%
mbuffer: error: outputThread: error writing to <stdout> at offset 0x17000: Broken pipe
mbuffer: warning: error during output to <stdout>: Broken pipe
^CCRITICAL ERROR: ssh -S /tmp/syncoid-root@source-1719867443-5981 root@source ' zfs send '"'"'my-pool/vm'"'"'@'"'"'autosnap_2024-06-27_12:57:01_monthly'"'"' | lzop | mbuffer -q -s 128k -m 16M' | mbuffer -q -s 128k -m 16M | lzop -dfc | pv -p -t -e -r -b -s 285496691432 | sudo zfs receive -F 'backups/server' failed: 2 at /usr/local/sbin/syncoid line 549.
Did you delete the whole pool, or just the dataset named server that you’d created beneath the pool? Because the only thing you were supposed to destroy was the dataset, not the entire pool it lived in.
Yes. The actual pool has to exist, because otherwise there’s no target to replicate to. But when it’s an initial replication, the target dataset cannot exist yet, because it has to have the same actual origin as the source dataset.
Because if you try to do the latter, well… just like the error message tried to tell you, targetpool/stuff already exists, and does not share any common snapshots with root@source:sourcepool/stuff.
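So with the names from your error output, recovery looks roughly like this (substitute your own disk layout when recreating the pool):

zpool create backups <your disks here>                # recreate the pool itself
# do NOT `zfs create backups/server` -- leave that name free
syncoid -r root@source:my-pool/vm backups/server      # the initial replication creates backups/server itself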
@mercenary_sysadmin After your helpful explanation yesterday, I corrected my rookie mistakes, and I now have 30 hourlies (is that even a word?) and 2 monthlies pulled from source onto target. Thank you!
Now I just need to learn how to restore from the zfs backup…
Once I figure that out, I may consider reconfiguring my topology as discussed in the other topic. It also occurred to me that, with compression enabled on my current backup box, that disk will be usable with the layout you suggested for longer than my ignorant self had originally imagined.
Full disaster recovery, where the entire pool got destroyed or your HBA fried all your disks? You stop the cron job or systemd timer that’s running your automatic backup, and then you zfs send in the other direction.
Just don’t forget to turn it back on when your production is back up.
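A sketch of what that reversed send looks like with the dataset names from earlier in this thread, run from the backup box once the rebuilt source pool exists (the timer name is hypothetical; stop whatever actually runs your backups):

systemctl stop syncoid-backup.timer                   # or comment out the cron entry for now
syncoid -r backups/server root@source:my-pool/vm      # push everything back to the rebuilt source pool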
You can also cherry-pick files out of snapshots if needed, which is what most restore scenarios look like, and that usually doesn’t even require you to touch the backup server. Oh no, Jerry from accounting accidentally overwrote this mission-critical spreadsheet! Just mount the snapshot, copy out the file you need, and done.
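Something like this, using the hidden .zfs/snapshot directory on the production box (the mountpoint, snapshot name, and file name here are just examples):

ls /my-pool/vm/.zfs/snapshot/                         # one directory per snapshot
cp /my-pool/vm/.zfs/snapshot/autosnap_2024-06-27_12:57:01_monthly/important.xlsx /my-pool/vm/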
So, I have my user privileges updated and that seems to run fine. I can execute zfs and syncoid commands without needing sudo privileges. I followed the directions on GitHub here and that all seems fine (as far as I can tell).
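For anyone following along, the delegation described there boils down to something along these lines; the user name is a placeholder and the exact permission lists are from memory, so defer to the syncoid docs:

zfs allow -u backupuser send,snapshot,hold,mount my-pool/vm                                    # on the source box, enough to send
zfs allow -u backupuser compression,create,mount,mountpoint,receive,rollback,destroy backups  # on the target box, enough to receive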
My issue is that my cron job is not running. I’ve checked syncoid and the last time it was run (based on the most recent hourlies) was when I did it manually, several days ago. Following is the cron job for my current user on my backup box.
If you can run the syncoid command manually as that user from the command line, then that’s not the issue. (Can you run the command in your crontab successfully, directly from the shell, as the same user?)
You might want to use --verbose and shell redirection to log its output if things aren’t working from cron. A syncoid --verbose source target >> /var/log/syncoid.log 2>&1 kind of deal (note that the redirection to the file needs to come before 2>&1 if you want stderr captured as well). Maybe get fancy and echo a timestamp line in first, also from your cron line.
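If it helps, a sketch of a crontab entry along those lines (the schedule and dataset names are examples, and the syncoid path is taken from your earlier error output; cron’s default PATH is minimal, which is a very common reason jobs work from a shell but not from cron):

PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin
0 * * * * echo "=== $(date) ===" >> /var/log/syncoid.log; /usr/local/sbin/syncoid --verbose -r root@source:my-pool/vm backups/server >> /var/log/syncoid.log 2>&1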
Okay, so I’m still screwing something up here. I’ve edited the crontab to try testing it every 5 minutes and then every minute, and two things are not happening: nothing is being output to the log file, and the command does not seem to run at all. If it matters, I’ve tried setting the owner of the log file to root and to the current user, and still nothing is written there.
Another curious thing: when checking systemctl status cron, cron attempts to run the job for the current user and wants to send the output to email, but it discards the output because I don’t have an MTA installed (and don’t care to install one at the moment). It’s odd that it doesn’t write to the log instead.
Edit: To make sure the log was being captured, I tested running syncoid directly with output sent to the log file. This worked correctly, as expected. It wrote to the log file indicating that the most recent hourly was synced. And running zfs list -rt all confirms this as true.