ZFS on USB drives (I know it's a bad idea, but how bad is it really?)

I have a small PC (a Beelink mini PC) that operates as a Plex/Jellyfin server. The OS runs on the internal SSD, but all my media lives on external USB hard drives. I would like to start playing around with ZFS, and I figured this might be a good place to start. However, from what I’ve read, putting ZFS on USB drives is a bad idea. I just want to know how bad an idea it is – keep in mind that this is not mission critical, but I would like it to be semi-reliable, and I don’t want to spend all my time troubleshooting issues that come from running on USB drives.

This machine runs in a closet, so there’s very little chance of accidental unplugging. The worst that happens is the occasional power failure, though even that is pretty rare.

Is this a good idea, or am I going to go crazy chasing down issues that are related to external HDDs?


I am not sure if we are allowed to link to Reddit, but you can see some user reports here and here. It is definitely workable, and scalable to some degree, but you’ll still limit yourself due to the overhead of the USB protocol and its bandwidth limitations. External hard drives can also get very hot (especially those in plastic cases), so it would be a good idea to use a fan, laptop cooling pad, etc. to help keep them cool, particularly during scrubs. Accepting those limitations, I think it is fine.

As you may know, RAIDZ vdevs generally cannot be expanded after the fact (until that feature gets added). The limitation is more noticeable when you are working with a smaller number of drives. Operations like these are somewhat more convenient in LVM or mdraid (though of course ZFS offers more features to protect data integrity).

In my experience the problems with USB don’t show up if you only use the USB drive(s) for, say, a backup pool that gets powered off once the backup is done.
Problems with USB mostly pop up when the drives are connected all the time, e.g. in a regular-use pool. I don’t have stats to back this up, but that is the case where I have seen problems with USB drives.

For a few years I have used a pool made entirely of USB drives. No issues whatsoever, even when one drive failed and I had to replace it; everything resilvered just fine. I still have one mirror pool in use with one internal drive and one USB drive. No issues at all. That said, this is purely anecdotal experience; just because it worked for me doesn’t mean it will work for you.

From my experience, the most important thing is not actually how you connect the drive but whether you have monitoring. Seeing failures early will let you react before your pool goes offline due to errors.

PS: Do take care to distribute your drives over different USB hubs, if possible, as the performance difference is sometimes quite noticeable.

It can work, but whether it works well for you will vary heavily based on your host’s USB controller, the OS, any hubs involved (internal or external), and the USB-to-SATA (or whatever) adapters inside your drives.

And it can fail in some absolutely wildly counterintuitive ways. I tried plugging in a SATA SSD via adapter and a 2.5in USB drive to a Pi 4 once, to test something, and made a raidz1 pool across two partitions on the SATA SSD and one on the spinning disk, then wrote some stuff and tried scrubbing. No write errors, but I kept getting back checksum errors that looked like some portion of the data had never gotten written.

Eventually, confused, I tried moving the two disks to my desktop PC and repeated the experiment…and got an enormous flood of write errors, but no new checksum errors after the first scrub. So was the USB stack eating the write errors somewhere and then I was finding out from scrubbing later that they didn’t make it to disk? If so, where?

Things like that, plus disks taking so long to respond that they fall off the bus (or worse, stay on it but stop responding to anything at all), are why people advise against mixing ZFS and USB disks. ZFS is pretty good at handling disks that cleanly error out or just say “I’m broken, oops”, but less so with half-broken devices that do strange, quirky things, and that’s a pretty apt description of most USB devices at the best of times.

My home “server” is an old Thinkpad x260 with 2 WD 5TB Passports for storage. The OS is on the internal SSD. At the moment, those drives have 27138 power-on hours. So far so good. (And yes, I have multiple backups)

I’ve been running mini-pcs with ZFS on USB drives for about 8 or 9 years now (but I was using ZFS before that for several years, so I wasn’t new to the concepts or to configuration). There are a few pitfalls, but in general it is a good (cheap-ish) way to create a reliable, solid filesystem with easy, over-the-net backups (zfs send/receive) built in.

If you’re starting from scratch, I’d recommend simple mirrors (rather than raidz). Before you even start with real disks, you can easily use files on an existing machine to create different configurations and get comfortable with the ZFS tools (use ‘truncate’ to create sparse files on an existing filesystem, then use zpool/zfs to create pools and ZFS filesystems backed by those files instead of actual disks). Get comfortable with the command set (“zpool list -v” is your best friend, and you almost always want “attach” and “detach”, -not- “add” and “remove”).
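For example, a throwaway practice pool built from sparse files might look like this (pool name and /tmp paths are made up; the zpool commands need root and a machine with ZFS installed, so this sketch skips them where ZFS isn’t present):

```shell
# Create three 1 GiB sparse files to stand in for disks (hypothetical paths).
truncate -s 1G /tmp/zdisk0 /tmp/zdisk1 /tmp/zdisk2

# Only attempt pool operations where ZFS is actually installed (run as root).
if command -v zpool >/dev/null 2>&1; then
    zpool create practice mirror /tmp/zdisk0 /tmp/zdisk1   # two-way mirror
    zpool attach practice /tmp/zdisk0 /tmp/zdisk2          # grow to a three-way mirror
    zpool list -v practice                                 # per-vdev listing
    zpool destroy practice                                 # clean up the experiment
fi
```

Note that growing the mirror uses “attach” (add a redundant copy to an existing vdev), not “add” (which would bolt on a new, non-redundant vdev that can never be removed from some pool layouts).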

Always use USB-3 ports, not USB-2.

Don’t assume that cables or built-in controllers on new USB external drives are good (cables don’t seem to be tested at all by many manufacturers and the USB<->SATA bridge controllers in external drives have a nasty habit of failing after just a couple of months of operation). Label your external drives, so you know which is which and keep the labels updated. Always leave space for UEFI boot on every physical drive, so that you’ve got multiple alternative boot devices.

Don’t use USB hubs if you can possibly avoid it (although a good USB-3 hub is better than resorting to a USB-2 port).

Drives - Don’t use SMR drives in any ZFS array (they may work for a while in normal operation, but a resilver will probably end up killing the disk and possibly the whole array …SMR is the kiss of death).

Some USB disks just want to go into sleep mode, no matter what your system configuration. As a last resort, create a file somewhere on the offending disk which is overwritten (changed, not just touched) by cron every couple of minutes (make sure that it is excluded from snapshots and backups).
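As a sketch, that keep-awake hack can be a single crontab entry (the path below is a placeholder; put the file on the disk that keeps sleeping, and exclude it from snapshots and backups):

```shell
# m h dom mon dow  command
# Rewrite the file's contents (not just its mtime) every 2 minutes.
# Note: % is special in crontab entries and must be escaped as \%.
*/2 * * * *  date +\%s > /tank/media/.keepawake
```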

Do run “zpool list -v” regularly and mail yourself the output (FreeBSD’s daily status emails are great for this). The same applies to “dmesg”.
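On Linux, something like this cron fragment approximates FreeBSD’s daily status mails (it assumes a working local MTA with a `mail` command; the file name and address are placeholders):

```shell
# /etc/cron.d/zpool-report (hypothetical): daily pool summary by mail.
# zpool status -x prints "all pools are healthy" unless something is wrong.
0 7 * * * root { zpool list -v; zpool status -x; dmesg | tail -n 50; } | mail -s "zpool report" you@example.com
```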

Do find out how to use “zfs allow” to delegate permissions to an ordinary user on ZFS filesystems so that you can “push” a filesystem backup from a remote server without having to log in as root (this is send/receive again …use syncoid to make this whole process easier).
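A sketch of the delegation side, assuming an unprivileged user named backup and a target dataset named backup-pool/media (both invented here); the zfs commands need root and only run where ZFS is installed:

```shell
BACKUP_USER="backup"            # hypothetical unprivileged user on the backup host
DATASET="backup-pool/media"     # hypothetical dataset that will receive the stream
PERMS="create,mount,receive"    # roughly what an incoming 'zfs receive' needs

# Delegate the permissions, then print what is currently delegated.
if command -v zfs >/dev/null 2>&1; then
    zfs allow "$BACKUP_USER" "$PERMS" "$DATASET"
    zfs allow "$DATASET"
fi
```

On the sending side, syncoid can then push as that user, along the lines of `syncoid tank/media backup@backup-host:backup-pool/media`.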

Do make frequent back-ups of your ZFS filesystem to another system (as has been said many, many times, RAID is not a back-up). I still use a Z8350 Atom (the cheapest, smallest X86 machine with a USB-3 port) to back-up some ZFS filesystems to a single, external USB disk (your back-up target doesn’t have to be another mirror).

Don’t despair! The learning curve for ZFS is fairly steep, but ZFS on USB drives doesn’t actually add to that.

It depends what you mean by “low power”.

Also, why do you suggest “ZFS on root” (I assume you mean the OS booting from a ZFS array) is a problem?

I believe ZFS is mainly used in datacentres, not homelabs (lots of possible metrics but I suspect most would support my assertion).

I run several machines sporting ZFS arrays - all are laptops, so I’m clearly in the ‘homelab’ space, though all are also ‘servers’ in the usual sense.

One has a total of 8 SATA SSDs attached, of which 5 are a Z2 array, 2 are a ZFS mirror and one is the USB-connected boot drive. Before I committed real data to this configuration I did a lot of experimentation; first with ESXi, now it’s Proxmox - way better! The Z2 array is handled by TrueNAS via pass-through.

Apologies for seeming off-topic (none of my systems currently runs ZFS on USB drives) but my relevant points are as follows, from my experience:

(a) USB flash drives (cards, sticks, whatever), even high-end USB3 ones such as my best Lexars, are slow to very very slow in a write-intensive role. Obviously not a problem if the OS simply boots from the drive and loads everything into RAM and then ignores the boot drive.

Neither ESXi nor Proxmox does that, but it isn’t critical and isn’t noticeable most of the time. At first.

(b) Almost all of the posts in this thread do not make a distinction between USB flash drives and USB-connected SSD/HD drives (though the OP mentioned USB-connected hard drives), so it is quite possible that people are talking at cross-purposes and completely misunderstanding each other.

(c) USB flash drives wear out really quickly (to the point of crashing/not booting - believe me) when used as the boot drive for a hypervisor! Not ideal, and I’d strongly recommend against it. Even an ancient SSD in a cheapo USB caddy is a vastly superior alternative.

If your USB-connected ZFS array is flash-based (as opposed to SATA SSDs) and write-intensive (resilvering is obviously write-intensive, and even scrubbing can be), then that’s your problem - USB flash drives are unreliable in write-intensive applications. Just ask TrueNAS…

I have a host that I would characterize as a test system and which is running ZFS using USB connected drives. It consists of

  • Pi 4B/4GB RAM
  • Wavlink drive sled with extra charging ports (it also powers the Pi) that supports UAS.
  • 2x 6TB enterprise (7200RPM) HDDs in ZFS mirror
  • crude shroud with cooling fans to draw air past the drives. (It works…)
  • MicroSD card to boot Debian Bookworm (not RpiOS.) I’ve moved “busy” portions of the filesystem like /var to the ZFS pool to reduce wear and tear on the MicroSD card.

It’s about 35% full and takes over 5 hours to scrub. Storage performance is sluggish compared to hosts with SATA-connected HDDs. I also run Gitea in a Docker container, and it performs adequately. And I can serve my “notes” (MkDocs based) using python3 -m http.server.

The sled has extra USB-3 ports, and when I connect an SSD to one, operation is fragile, but with only the two HDDs it’s been pretty solid. Total power usage (measured at the wall) is about 25 watts; it goes up a couple of watts during heavy usage and scrubs.

I’m a fan of Raspberry Pis but most definitely not a fan of USB connected drives. Nevertheless, these seem to be working well.

I also have a Beelink mini PC. My plan:

  • 3x 14TB USB HDDs (USB 3)
  • 500GB NVMe as the Proxmox system disk
  • 2TB SATA SSD (I want to use it for ZFS caching)
  • 5TB USB HDD (to use as a peer-to-peer backup area)

Essentially, I don’t want to use a raidz pool because of heat, noise and electricity consumption (no spin-down). So I want to configure the disks as N single-disk ZFS pools: 2x 14TB for media, and the 5TB plus 1x 14TB for SnapRAID parity (synced every hour).

Do you think this can work (well)?
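For reference, a SnapRAID layout along those lines might look roughly like this (all mount points and file names are invented; double-check the directives against the SnapRAID documentation, and note that a parity disk must be at least as large as the largest data disk):

```shell
# /etc/snapraid.conf (sketch)
parity /mnt/parity/snapraid.parity      # on the 14TB parity disk
content /var/snapraid/snapraid.content  # content file on the system disk
content /mnt/disk1/snapraid.content     # plus a copy on a data disk
data d1 /mnt/disk1                      # first 14TB media disk
data d2 /mnt/disk2                      # second 14TB media disk

# crontab entry for the hourly sync:
# 0 * * * *  snapraid sync
```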


This is one of the limitations of ZFS: it’s difficult to find low-power hardware for it. The options are very limited if you want SATA or PCIe external connections. Even internally, you can have 2 drives, but that means ZFS on root.