I’ll admit it. I have painted myself into a corner.
I’m currently using a Hetzner “Auction server” with two 4 TB disks hanging off of it as my backup destination, and it works great.
But I expect my source data sets to keep growing over the years. Growth is the whole reason I switched to a “real” NAS in the first place: my old 4 TB NAS was constantly filling up.
So my Hetzner server has ~500 GB free, and I am on borrowed time.
With my old Synology, its Hyper Backup tool compressed the source data dramatically on the destination: if memory serves, ~4 TB on the source compressed down to less than 1 TB in Backblaze B2.
Are there any magic levers I can pull to achieve the same thing with ZFS? I’ve already set compression on the backup zpool to zstd.
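For context, here’s roughly how I’m checking and tuning it (a sketch, assuming OpenZFS 2.0+ where the zstd-N levels are available; the pool name matches my setup):

# Show the configured algorithm and the ratio actually achieved so far
# (compressratio only reflects data written since compression was enabled)
zfs get compression,compressratio backuppool

# zstd accepts explicit levels (zstd-1 .. zstd-19); higher levels trade
# CPU time for a better ratio. Changing this only affects newly written
# blocks -- existing data keeps its old compression until rewritten.
zfs set compression=zstd-9 backuppool

My understanding is that already-written blocks would need a rewrite (e.g. a fresh send/receive) to pick up a new level, so I’m not sure how much this buys me on the existing 3 TB.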
root@rescue /backuppool # zpool list -v
NAME          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
backuppool   3.62T  3.07T   569G        -         -     6%    84%  1.00x  ONLINE  -
  mirror-0   3.62T  3.07T   569G        -         -     6%  84.7%      -  ONLINE
    sda      3.64T      -      -        -         -      -      -      -  ONLINE
    sdb      3.64T      -      -        -         -      -      -      -  ONLINE
root@rescue /backuppool #