ZFS on USB drives (I know it's a bad idea, but how bad is it really?)

I have a small PC (a Beelink mini PC) that operates as a Plex/Jellyfin server. The OS runs on the internal SSD, but all my media lives on external USB hard drives. I would like to start playing around with ZFS, and I figured this might be a good place to start. However, from what I’ve read, putting ZFS on USB drives is a bad idea. I just want to know how bad of an idea this is. Keep in mind that this is not mission critical, but I would like it to be semi-reliable, and I don’t want to spend all my time troubleshooting issues that are the result of running on USB drives.

This machine runs in a closet, so there’s very little chance of accidental unplugging. The worst that happens is the occasional power failure, though even that is pretty rare.

Is this a good idea, or am I going to go crazy chasing down issues that are related to external HDDs?


I am not sure if we are allowed to link to Reddit, but you can see some users report examples here and here. Definitely workable and scalable to some degree, but you’ll still limit yourself due to the overhead of the USB protocol and its bandwidth limitations. External hard drives can also get very hot (especially those in plastic cases), so it would be a good idea to have a fan, laptop cooling pad, etc. to help keep them cool, particularly during scrubs. Accepting those limitations, I think it is fine.

As you may know, RAIDZ vdevs generally cannot be expanded after the fact (until that feature gets added). This limitation is more noticeable when you are working with a small number of drives. Operations like these are somewhat more convenient in LVM or mdraid (though of course ZFS offers more features to protect data integrity).
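
To illustrate with hypothetical pool and device names: a pool built from mirrors can grow a disk or a vdev at a time, while a raidz vdev’s width is fixed at creation (absent the expansion feature):

    # Growing a mirror-based pool is easy: add another mirror vdev...
    zpool add tank mirror /dev/sdc /dev/sdd

    # ...or turn a lone disk into a mirror after the fact:
    zpool attach tank /dev/sda /dev/sdb

    # A raidz vdev, by contrast, cannot be widened one disk at a time
    # unless your OpenZFS release supports the raidz expansion feature.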

In my experience the problems with USB don’t show up if you only use the USB drive(s) for, e.g., a backup pool that gets turned off after the backup is done.
Problems with USB mostly pop up when drives are connected all the time, e.g. in a regular-use pool. I don’t have stats to back this up, but that is where I have seen problems with USB drives.

For a few years I used a pool made entirely of USB drives. No issues whatsoever, even when one drive failed and I had to replace it; everything resilvered just fine. I still have one mirror pool in use with one internal drive and one USB drive. No issues at all. That said, this is purely anecdotal experience; just because it worked for me doesn’t mean it will work for you.

From my experience, the most important thing is not actually how you connect the drive but whether you have monitoring. Catching failures early will allow you to react before your pool goes offline due to errors.
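
A minimal sketch of that kind of monitoring, assuming a host with cron and working local mail (the address is a placeholder):

    #!/bin/sh
    # Run from cron every 15 minutes or so. 'zpool status -x' prints
    # "all pools are healthy" unless something is wrong, so only
    # unhealthy states generate mail.
    status="$(zpool status -x)"
    if [ "$status" != "all pools are healthy" ]; then
        echo "$status" | mail -s "ZFS problem on $(hostname)" you@example.com
    fi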

PS: Do take care to distribute your drives over different USB hubs if possible, as the performance difference is sometimes quite noticeable.

It can work, but whether or not it works well for you is going to vary heavily based on your host’s USB controller, the OS, any hubs involved (internal or external), and the USB->SATA (or whatever) adapters inside your drives.

And it can fail in some absolutely wildly counterintuitive ways. I once plugged a SATA SSD (via an adapter) and a 2.5in USB drive into a Pi 4 to test something, made a raidz1 pool across two partitions on the SATA SSD and one on the spinning disk, then wrote some stuff and tried scrubbing. No write errors, but I kept getting back checksum errors that looked like some portion of the data had never been written.

Eventually, confused, I moved the two disks to my desktop PC and repeated the experiment… and got an enormous flood of write errors, but no new checksum errors after the first scrub. So was the USB stack eating the write errors somewhere, and I was only finding out later, from scrubbing, that the data never made it to disk? If so, where?

Things like that, plus disks taking so long to respond that they fall off the bus (or worse, stay on the bus but stop responding to anything at all), are why people advise against mixing ZFS and USB disks. ZFS is pretty good at handling disks that cleanly error out or just say “I’m broken, oops”, but less so with half-broken devices that do strange, quirky things, and that’s a pretty apt description of most USB devices at the best of times.

My home “server” is an old ThinkPad X260 with two WD 5TB Passports for storage. The OS is on the internal SSD. At the moment, those drives have 27,138 power-on hours. So far, so good. (And yes, I have multiple backups.)

I’ve been running mini PCs with ZFS on USB drives for about 8 or 9 years now (but I had used ZFS for several years before that, so I wasn’t new to the concepts or the configuration). There are a few pitfalls, but in general it is a good (cheap-ish) way to create a reliable, solid filesystem with easy over-the-net backups (zfs send/receive) built in.

If you’re starting from scratch, I’d recommend that you go for simple mirrors (rather than raidz). Before you even start with real disks, you can use files on an existing machine to try different configurations and get comfortable with the ZFS tools: use ‘truncate’ to create files on an existing filesystem and then simply point zpool/zfs at those files instead of actual disks. Get comfortable with the command set (“zpool list -v” is your best friend, and you almost always want “attach” and “detach”, not “add” and “remove”).
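
For example, a throwaway practice session might look like this (paths and sizes are arbitrary):

    # Create sparse backing files and a practice mirror pool
    truncate -s 1G /tmp/zdisk0 /tmp/zdisk1 /tmp/zdisk2
    zpool create testpool mirror /tmp/zdisk0 /tmp/zdisk1
    zpool list -v testpool

    # attach grows the mirror to 3-way; detach shrinks it back
    zpool attach testpool /tmp/zdisk1 /tmp/zdisk2
    zpool detach testpool /tmp/zdisk2

    # ('zpool add' would instead stripe a new vdev into the pool,
    # which is usually not what you want here)
    zpool destroy testpool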

Always use USB-3 ports, not USB-2.

Don’t assume that the cables or built-in controllers on new USB external drives are good (cables don’t seem to be tested at all by many manufacturers, and the USB<->SATA bridge controllers in external drives have a nasty habit of failing after just a couple of months of operation). Label your external drives so you know which is which, and keep the labels updated. Always leave space for UEFI boot on every physical drive, so that you have multiple alternative boot devices.

Don’t use USB hubs if you can possibly avoid it (although a good USB-3 hub is better than resorting to a USB-2 port).

Drives - Don’t use SMR drives in any ZFS array (they may work for a while in normal operation, but a resilver will probably end up killing the disk and possibly the whole array …SMR is the kiss of death).

Some USB disks just want to go into sleep mode, regardless of your system configuration. As a last resort, create a file somewhere on the offending disk that is overwritten (changed, not just touched) by cron every couple of minutes, and make sure that it is excluded from snapshots and backups.
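
A minimal version of that keep-awake hack (dataset and path are hypothetical; com.sun:auto-snapshot is the property honored by zfs-auto-snapshot, so adjust for whatever snapshot tooling you use):

    # One-off setup: a tiny dataset that snapshot tools can skip
    zfs create -o com.sun:auto-snapshot=false tank/keepalive

    # crontab entry: rewrite (not just touch) a file every 2 minutes
    */2 * * * * date > /tank/keepalive/ping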

Do run “zpool list -v” regularly and mail yourself the output (FreeBSD’s daily status emails are great for this). The same applies to “dmesg”.
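
On Linux, a rough equivalent of FreeBSD’s daily emails could look like this (assuming working local mail; the address is a placeholder):

    #!/bin/sh
    # /etc/cron.daily/zfs-report (make it executable)
    { zpool list -v; echo; zpool status; echo; dmesg | tail -n 100; } |
        mail -s "ZFS report for $(hostname)" you@example.com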

Do find out how to use “zfs allow” to delegate permissions to ordinary users on ZFS filesystems, so that you can “push” a filesystem backup from a remote server without having to log in as root (this is send/receive again …use syncoid to make this whole process easier).
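
A rough sketch of that delegation (user, host, and dataset names are hypothetical):

    # On the backup target: let an unprivileged user receive streams
    zfs allow backupuser receive,create,mount tank/backups

    # On the source: let the same user send snapshots
    zfs allow backupuser send,snapshot,hold tank/data

    # A push over ssh then works without logging in as root (note:
    # on Linux, actually mounting received filesystems still needs root)
    zfs send tank/data@snap1 | ssh backupuser@backuphost zfs receive tank/backups/data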

Do make frequent backups of your ZFS filesystems to another system (as has been said many, many times, RAID is not a backup). I still use a Z8350 Atom (the cheapest, smallest x86 machine with a USB-3 port) to back up some ZFS filesystems to a single external USB disk (your backup target doesn’t have to be another mirror).
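
With syncoid, the whole push can collapse to one cron entry (names again hypothetical):

    # crontab entry: push tank/data to the backup box nightly at 03:00
    0 3 * * * syncoid --no-sync-snap tank/data backupuser@backuphost:tank/backups/data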

Don’t despair! The learning curve for ZFS is fairly steep, but ZFS on USB drives doesn’t actually add to that.

This is one of the limitations of ZFS: it’s difficult to find low-power hardware for it. The options are very limited if you want SATA or PCIe external connections. Even internally, you can have two drives, but that means ZFS on root.

It depends what you mean by “low power”.

Also, why do you suggest that “ZFS on root” (I assume you mean the OS booting from a ZFS array) is a problem?

I believe ZFS is mainly used in datacentres, not homelabs (there are lots of possible metrics, but I suspect most would support my assertion).

I run several machines sporting ZFS arrays - all are laptops, so I’m clearly in the ‘homelab’ space, though all are also ‘servers’ in the usual sense.

One has a total of 8 SATA SSDs attached, of which 5 form a Z2 array, 2 are a ZFS mirror, and one is the USB-connected boot drive. Before I committed real data to this configuration I did a lot of experimentation, first with ESXi; now it’s Proxmox (way better!). The Z2 array is handled by TrueNAS via pass-through.

Apologies for seeming off-topic (none of my systems currently runs ZFS on USB drives), but from my experience the relevant points are as follows:

(a) USB flash drives (cards, sticks, whatever), even high-end USB3 ones such as my best Lexars, are slow to very, very slow in a write-intensive role. Obviously not a problem if the OS simply boots from the drive, loads everything into RAM, and then ignores the boot drive.

Neither ESXi nor Proxmox does that, but it isn’t critical and isn’t noticeable most of the time. At first.

(b) Almost all of the posts in this thread do not make a distinction between USB flash drives and USB-connected SSD/HD drives (though the OP mentioned USB-connected hard drives), so it is quite possible that people are talking at cross-purposes and completely misunderstanding each other.

(c) USB flash drives wear out really quickly (to the point of crashing/not booting - believe me) when used as the boot drive for a hypervisor! Not ideal, and I’d strongly recommend against it. Even an ancient SSD in a cheapo USB caddy is a vastly superior alternative.

If your USB-connected ZFS array is flash-based (as opposed to SATA SSDs) and write-intensive (resilvering is obviously write-intensive, and even scrubbing can be), then that’s your problem - USB flash drives are unreliable in write-intensive applications. Just ask TrueNAS…

I too have a Beelink mini PC. My plan:

  • 3x 14TB USB HDDs (USB 3)
  • 500GB NVMe Proxmox system disk
  • 2TB SATA SSD (I want to use it for ZFS caching)
  • 5TB USB HDD (to use as a peer-to-peer backup area)

Essentially I don’t want to use a raidz pool, because of the heat, the noise, and the electricity consumption (no spin-down). So I want to configure all the disks as N single-disk ZFS pools: 2x 14TB for multimedia, and the 5TB plus 1x 14TB for SnapRAID parity (snapraid sync every hour; rough sketch below).
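
A minimal snapraid.conf along those lines might look like this (mount points hypothetical; note that SnapRAID expects each parity disk to be at least as large as the largest data disk, so the 5TB drive can’t hold parity for the 14TB data disks on its own):

    # /etc/snapraid.conf (sketch)
    parity /mnt/parity14tb/snapraid.parity
    content /var/snapraid.content
    content /mnt/data1/snapraid.content
    data d1 /mnt/data1/
    data d2 /mnt/data2/

    # then from cron, e.g. hourly:
    # 0 * * * * snapraid sync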

Do you think this can work (well)?

Thanks
Olindo

I have a host that I would characterize as a test system and which is running ZFS using USB connected drives. It consists of

  • Pi 4B/4GB RAM
  • Wavlink drive sled with extra charging ports (it also powers the Pi) that supports UAS.
  • 2x 6TB enterprise (7200RPM) HDDs in ZFS mirror
  • crude shroud with cooling fans to draw air past the drives. (It works…)
  • MicroSD card to boot Debian Bookworm (not RpiOS). I’ve moved “busy” portions of the filesystem like /var to the ZFS pool to reduce wear and tear on the MicroSD card (rough sketch of that move below).
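
The /var move was roughly this (a sketch, assuming a pool named ‘tank’; do the copy with services stopped or from a rescue shell):

    # Create a dataset and copy /var across
    zfs create tank/var
    rsync -aAXH /var/ /tank/var/

    # Then bind-mount it over /var at boot via /etc/fstab:
    # /tank/var  /var  none  bind  0  0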

It’s about 35% full and takes over 5 hours to scrub. Storage performance is sluggish compared to hosts with SATA-connected HDDs. I also run Gitea in a Docker container, and that performs adequately. And I can serve my “notes” (MkDocs-based) using python3 -m http.server.

The sled has extra USB-3 ports, and when I connect an SSD to one, operation is fragile, but with only the two HDDs it’s been pretty solid. Total power usage for these (measured at the wall) is about 25 watts. It goes up a couple of watts during heavy usage and scrubs.

I’m a fan of Raspberry Pis but most definitely not a fan of USB connected drives. Nevertheless, these seem to be working well.

So…I guess my dream of taking the basket of USB sticks I don’t use and mashing them together into a big ZFS pool is probably a really bad idea :wink:


My first RAID5 array was built from 3.5" floppy drives, so don’t let me discourage you.

Then again, I didn’t try to use my floppy-disk RAID5 in production or for anything where it would inconvenience me in any way if and when it failed. :slight_smile:


So here’s my 2 cents after running ZFS via USB for almost 2 years.

Electricity is expensive, so my main file server stays powered off and is powered on intermittently to do backups or to retrieve less commonly used data.

For 24/7 service I have a Raspberry Pi 4 with a 1TB Samsung T5 USB SSD as a single-disk USB pool.

I had zero issues for 18 months.

More recently the drive intermittently throws I/O errors and ZFS suspends the pool. I reboot and scrub, and typically it’s fine; sometimes it’s some metadata that ZFS keeps two copies of and can recover.

Given the time in service working fine, I’m guessing this has less to do with USB and more to do with the lower-endurance NAND in removable storage coming to the end of its life. That’s only my speculation; I’ve not done any testing to confirm it.

I built my setup mostly for low power requirements, but I think next time around I’ll get a little mini PC, e.g. on an Intel N100 platform or similar, ideally with two SATA ports, and get rid of USB storage.


Yup, that sounds like a USB install, all right.

You’ll be fine once you ditch the USB. Even dogshit eMMC storage (hell, even SD cards, for the most part) does well enough for what you’d expect out of it, but USB is an absolute nightmare in the long run.

I was honestly surprised how long it lasted without causing me trouble. It was a convenient stopgap using hardware I already had on hand, and it behaved for so long that I never felt any urgency to replace it.

I suspect that’s because you started out with an extremely high-quality drive. Those Samsung drives are head and shoulders above the typical USB drive out there.

They’re still USB, though. =)

Would you mind sharing the SMART values for TBW?

The T5 is basically a(n mSATA) 850 EVO inside. The 850 EVO 500 GB is rated for 150 TBW. (Reddit)

Most of the useful things aren’t testable. But sure:

smartctl 7.2 2020-12-30 r5155 [aarch64-linux-5.15.0-1047-raspi] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Samsung based SSDs
Device Model:     Samsung Portable SSD T5
Serial Number:    S46UNS0R800420K
LU WWN Device Id: 5 002538 e00000000
Firmware Version: MVT42P1Q
User Capacity:    1,000,204,886,016 bytes [1.00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    Solid State Device
Form Factor:      mSATA
TRIM Command:     Available, deterministic, zeroed
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ATA8-ACS T13/1699-D revision 4c
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Mon Mar 18 09:45:09 2024 UTC
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00)	Offline data collection activity
					was never started.
					Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0)	The previous self-test routine completed
					without error or no self-test has ever 
					been run.
Total time to complete Offline 
data collection: 		(    0) seconds.
Offline data collection
capabilities: 			 (0x53) SMART execute Offline immediate.
					Auto Offline data collection on/off support.
					Suspend Offline collection upon new
					command.
					No Offline surface scan supported.
					Self-test supported.
					No Conveyance Self-test supported.
					Selective Self-test supported.
SMART capabilities:            (0x0003)	Saves SMART data before entering
					power-saving mode.
					Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine 
recommended polling time: 	 (   2) minutes.
Extended self-test routine
recommended polling time: 	 (  85) minutes.

SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   096   096   000    Old_age   Always       -       15998
 12 Power_Cycle_Count       0x0032   099   099   000    Old_age   Always       -       30
177 Wear_Leveling_Count     0x0013   099   099   000    Pre-fail  Always       -       14
179 Used_Rsvd_Blk_Cnt_Tot   0x0013   100   100   010    Pre-fail  Always       -       0
181 Program_Fail_Cnt_Total  0x0032   100   100   010    Old_age   Always       -       0
182 Erase_Fail_Count_Total  0x0032   100   100   010    Old_age   Always       -       0
183 Runtime_Bad_Block       0x0013   100   100   010    Pre-fail  Always       -       0
187 Uncorrectable_Error_Cnt 0x0032   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0032   072   039   000    Old_age   Always       -       28
195 ECC_Error_Rate          0x001a   200   200   000    Old_age   Always       -       0
199 CRC_Error_Count         0x003e   100   100   000    Old_age   Always       -       0
235 POR_Recovery_Count      0x0012   099   099   000    Old_age   Always       -       29
241 Total_LBAs_Written      0x0032   099   099   000    Old_age   Always       -       19366645082

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

Thanks. This works out to roughly 9.9 TB written (19,366,645,082 LBAs x 512 bytes), nowhere near the rated lifespan of the SSD. Have you tried using another SATA-USB cable? I recently had a defective controller cause I/O errors.
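
For reference, the conversion from the smartctl output above, assuming the reported 512-byte sectors:

    # RAW_VALUE is the 10th field of the attribute line
    smartctl -A /dev/sdX | awk '/Total_LBAs_Written/ {printf "%.1f TB written\n", $10 * 512 / 1e12}'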

EDIT: Pardon me. Since it’s an actual portable SSD, the controller is built in, so it should already be an appropriate one.