Remote ZFS backup destination on family's main PC

Looking for validation as to whether this is a good or bad idea.

I need to upgrade my in-laws’ Windows 10 PC to Windows 11, and I’m wondering if I could use the new PC as a ZFS backup target. They’ll need new hardware anyway, so could I build something that gives them Windows 11 while I run a ZFS-compatible OS under the Windows 11 Pro hypervisor (Hyper-V)? I’ve tested this theory locally and it works. I’d plan to build it with a couple of HDDs which would be passed through to the ZFS guest.

The PC at my in-laws’ will be switched on once per day for 2-3 hours. I’m slightly concerned about the drives being powered on and off daily, but there’s already a drive in their current PC that’s been doing exactly that for ten years and it’s been fine. Great sample size, I know!

I visit my in-laws about twice a year, as they live in the UK and I’m in the US, so maintenance would be tricky.

I’m looking for opinions on whether this is a good or bad idea. How can I ensure syncoid runs during the 2-3 hour window? If I were willing to reduce the storage size, would a DC600 SSD cope better with the daily power cycles?
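For the scheduling piece, my current thinking is a wrapper on my side that cron fires every 15 minutes during their usual powered-on window, pushes with syncoid if the box is reachable, and exits quietly otherwise. Hostnames, dataset names, and the window itself are all placeholders here; it’s just a sketch of the idea:

#!/bin/sh
# Hypothetical push-replication wrapper, run from cron on my US-side server,
# e.g.:  */15 14-17 * * *   (their usual window, expressed in my local time)
TARGET=inlaws-backup.example.com        # the ZFS guest on their new PC
SRC=tank/data                           # dataset on my main server
DST=dozer/backups/tank/data             # destination dataset on their machine

# Bail out quietly if the PC isn't switched on right now
ping -c 1 -W 5 "$TARGET" >/dev/null 2>&1 || exit 0

# flock stops a second run from starting while the previous one is still going;
# syncoid's resume support means an interrupted run carries on next time
exec flock -n /tmp/inlaws-backup.lock \
    syncoid --no-sync-snap "$SRC" "root@$TARGET:$DST"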

It would be impractical to have a separate always-on device.

The only challenge I foresee with this setup is that I’m not at all well versed in PCI passthrough on Windows Hyper-V.

You want the virtualized OS to have full, bare-metal access to the disks, or even better, the disk controller. You don’t want to be using virtual disks for ZFS if it can be avoided.

I forgot to say that I was able to pass a hard drive through after marking it offline in Windows Disk Management.

That’s good, but it will make disk replacement slightly more complicated unless you pass through the entire controller.

Thinking about this - what are they using this PC for?

I’m wondering if it makes more sense to make the host a KVM host, install a desktop environment plus the SPICE client on it, and run Windows as the virtual machine.

That way they get their familiar Windows environment, but you get a more robust system for the host, and ZFS snapshots of the entire virtual machine.
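As a very rough sketch of what I mean (pool, zvol, and ISO paths are invented; you’d also want the virtio driver ISO handy for the Windows installer):

# Carve out a zvol for the Windows guest so snapshots cover the whole VM
zfs create -V 500G -o volblocksize=64k tank/vms/win11

# Create the guest with SPICE graphics so the local desktop session feels native
# (check `osinfo-query os` for the exact --os-variant your distro knows about)
virt-install \
  --name win11 \
  --memory 16384 --vcpus 4 \
  --os-variant win11 \
  --cdrom /isos/Win11.iso \
  --disk path=/dev/zvol/tank/vms/win11,bus=virtio \
  --graphics spice \
  --network network=default,model=virtio

A recursive zfs snapshot of tank/vms before any big Windows update then gives you an instant rollback point for the whole machine.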

Also, unless they really need more than a terabyte or two I’d probably go with SSDs just for the additional responsiveness.

I do have to wonder about making things too complicated. If it’s a really small amount of data, or it doesn’t change very much, maybe look into the free Veeam Agent for Windows. You can just point it at an SMB share. Though admittedly that’s not ideal if it’s going over the WAN.

Hi @aidan, I’ve done a similar setup on my elderly parents’ PC: Windows 10 on an SSD, and Ubuntu on Hyper-V with two 14TB drives directly attached to the VM. I manage the remote power-on with a PiKVM. That way I have a scheduled playbook at my home that turns the PC on, does the backup, and then returns the PC to its previous state, so it doesn’t get powered off if it was in use.
It’s not the most reliable thing, but for my use it’s sufficient. I also back up my most important data to S3 with restic, to have an additional restore point and reduce risk.
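Roughly, the flow is like this (hostnames, credentials and datasets here are invented; in reality it’s an Ansible playbook, but the PiKVM part is just its HTTP API):

#!/bin/sh
# Sketch of the nightly job: wake the PC with the PiKVM, replicate, then
# put the power back how it was.
PIKVM=https://pikvm.example.net
AUTH=admin:changeme
TARGET=parents-vm.example.net        # the Ubuntu VM under Hyper-V

# Was the PC already on? (ATX power LED as reported by the PiKVM API)
WAS_ON=$(curl -sk -u "$AUTH" "$PIKVM/api/atx" | jq -r '.result.leds.power')

if [ "$WAS_ON" != "true" ]; then
    # Short power-button press to switch it on, then give Windows and the VM time to boot
    curl -sk -X POST -u "$AUTH" "$PIKVM/api/atx/click?button=power"
    sleep 300
fi

syncoid --no-sync-snap tank/data "root@$TARGET:backup/tank/data"

if [ "$WAS_ON" != "true" ]; then
    # Another short press: ACPI power button, which Windows handles as a normal shutdown
    curl -sk -X POST -u "$AUTH" "$PIKVM/api/atx/click?button=power"
fi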

By Christmas I will probably upgrade that PC; my mum will be super excited. :clown_face:

Glad to see somebody has done this before. I hadn’t thought of controlling the power remotely; that’s genius!
Why do you say it’s not the most reliable? Have you experienced any issues, or does it just feel a bit hacky?

I like the suggestion, but as I’m also their IT support, I want to keep the solution simple and, above all, reliable for them.
I wasn’t planning on using a controller beyond what’s built into the motherboard. What issues do you foresee with replacing disks when I’m only passing through the individual disks? I’ve never passed disks through before, but this is from my test system:

zpool status -v
  pool: dozer
 state: ONLINE
  scan: scrub repaired 0B in 09:08:26 with 0 errors on Sat Apr 19 14:17:07 2025
config:

	NAME                      STATE     READ WRITE CKSUM
	dozer                     ONLINE       0     0     0
	  wwn-0x5000cc2a52dda13f  ONLINE       0     0     0

It looks the same as on my main ZFS host, apart from the different ID.

Not a problem; it just adds a few extra steps to the disk replacement procedure.
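The ZFS half of a replacement stays the usual one-liner; the extra steps are on the Windows side, offlining the new disk and attaching it to the VM before ZFS can see it. Something like this, with placeholder device names:

# New disk already offlined in Windows Disk Management and attached to the VM,
# exactly like the original passthrough step
zpool replace dozer wwn-0x5000cc2a52dda13f wwn-0xNEWDISK
zpool status -v dozer     # watch the resilver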


It’s not reliable because the PC can be powered off while the backup is running (Windows users don’t know what is happening under the hood), and a lot of things can go bad on a daily basis :sweat_smile:.

Internet, power, hardware, and many other things fail!

Since I have local, remote, and cloud backups, I feel OK if I have problems from time to time :grin:.

This is very true, but it’s also what makes a ZFS target especially desirable. You cannot make a ZFS target inconsistent just by power-cycling it; all you can do is interrupt the backup run, and thanks to resumable replication, it’ll just pick back up from where it left off the next time the target is available.
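If you’re curious what that looks like under the hood: a resumable receive just needs -s on the receiving side, and after an interruption the partial state shows up as a token you hand back to zfs send. Syncoid does all of this for you automatically; dataset names below are purely illustrative.

# -s on the receiving side preserves partial state across interruptions
zfs send -i tank/data@monday tank/data@tuesday | \
    ssh inlaws-pc zfs receive -s dozer/backups/data

# after the plug gets pulled, ask the target for its resume token...
ssh inlaws-pc zfs get -H -o value receive_resume_token dozer/backups/data

# ...and restart the stream from exactly where it stopped
zfs send -t <token-from-above> | ssh inlaws-pc zfs receive -s dozer/backups/data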

Honestly, though, a separate always-on device might really be a better idea. It can be very small and very low-power; have a look at the $139 Odroid H4+. You get a quad-core Alder Lake x86 CPU, 4 proper SATA ports, and extremely low power draw (they recommend a 60W power supply, or 133W if you’re using full-size 3.5" drives; the CPU has a 12W TDP, and the entire system should idle at 5W or less, depending mostly on how many drives you stuff in).

Note that the $139 I mentioned gets you the processor, board, and heatsink. You’ll need to add RAM, whatever drives you want, and a PSU and case. If you want to go with a pair of 2.5" drives, Odroid’s Gamecube-style case would work well: https://www.hardkernel.com/shop/h4-cube-case/

Their 60W power supply, with a UK plug (I think that’s where you said this would be going?) is $10: https://www.hardkernel.com/shop/15v-4a-power-supply-uk-plug/ The 130W one is $25, but I’m not sure if they’re currently offering it with UK plugs.

If you’re just doing a pair of 2.5" SSDs, though, I think you’d be perfectly fine with the 60W supply.

would a DC600 SSD cope better with the daily power cycles?

Your concern here isn’t really daily power cycles so much as write endurance. But if you’re doing a dedicated tiny backup appliance, it’s not going to take enough writes to worry about, particularly if you avoid installing a GUI and just run pure CLI: the host OS won’t be writing a bunch of ephemeral garbage all the time. All you’ll have is your one backup run per day, and with that presumably being ZFS-based… like I said, unlikely to put a big dent in the endurance.

A DC600M is rated for 1.0 DWPD (the drive’s full capacity written once per day, every day, over five years). Prosumer SSDs are usually rated at about 0.6 DWPD, and cheap consumer crap is usually 0.2-0.3 DWPD. The “cheaper” the drive you’re buying, the more you need to go overboard on total capacity to keep it from turning into garbage faster than you want it to.
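To put rough numbers on it: assuming, say, a 1.92TB DC600M, 1.0 DWPD works out to about 1.92TB × 365 × 5 ≈ 3.5PB of rated writes. Even a generous 50GB of backup deltas landing on it every single day only adds up to around 90TB over those same five years, call it 3% of the rating.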

Also be aware: those ratings are about how confident the manufacturer is that the drive will be USABLE after five years of that many drive writes per day… they are not meant to guarantee factory-fresh performance after that many writes. IME, most drives are down to about 1/2 to 2/3 of their original performance by the time they hit half of their rated write endurance. Many consumer drives are damn near unusable by that point; again IMO, IME.

Your two opposing factors here are: 1. it’s just a backup target, so far, FAR fewer writes than the source will experience while CREATING the data being backed up; but 2. it’s going to be on the other side of a rather large pond. Choose wisely! :slight_smile:
