ZFS on USB drives (I know it's a bad idea, but how bad is it really?)

I’ve been “playing” with the system I described above. I wanted to replace the 6TB HDDs with 8TB HDDs to increase space. My first try was to attach the first 8TB HDD to the mirror and remove one of the 6TB HDDs when the resilver/scrub completed. With the third USB HDD connected (on a separate USB/SATA adapter plugged into the other Pi 4B USB3 port), the system started falling apart. All three drives started experiencing problems and were frequently resetting. There were scary messages in the dmesg output and system hangs. The change in behavior was dramatic. I resolved the situation by disconnecting the extra USB/SATA adapter and replacing a 6TB HDD with an 8TB HDD, and the resilver/scrub completed without further incident.
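For anyone following along, the attach-then-detach dance looks roughly like this; the pool name and disk IDs below are hypothetical placeholders:

```
# Add the 8TB drive as an extra mirror member next to an existing 6TB drive
# ("tank" and the disk IDs are placeholders; use /dev/disk/by-id paths).
zpool attach tank usb-OLD_6TB_DISK usb-NEW_8TB_DISK

# Watch the resilver progress:
zpool status -v tank

# Once resilvered (and ideally after a clean scrub), drop the 6TB drive:
zpool detach tank usb-OLD_6TB_DISK
```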

In the meantime I tried the USB/SATA adapter with the 2nd 8TB HDD on an x86_64 host and ran into problems with that too. It turns out that when the description for the USB/SATA adapter said drives up to 6TB, they really meant it. :roll_eyes:

I bought a new adapter (UGREEN), and on the x86_64 host it worked flawlessly to mirror the 8TB HDD to the 6TB HDD. I plan to try it with the Pi 4B server and will report back with the results.

I really prefer not to use USB for storage, but as long as it isn’t pushed over the edge it has been pretty solid for me.


I’ve had issues backing up to a USB drive using ZFS send/receive.

I should say I’ve both had and not had issues :smiley:

I used an externally powered 3.5″ HDD for years as a backup drive using ZFS receive, without a hitch. This was on a desktop, and it was in the pre-SMR days, so it’s safe to assume CMR. It was USB 2.
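For context, it was the usual send/receive pattern; a minimal sketch with hypothetical pool and dataset names:

```
# One-time setup: a single-disk pool on the external USB drive
# ("backup" and the device path are placeholders).
zpool create backup /dev/disk/by-id/usb-External_HDD-0:0

# The very first run needs a full send:
#   zfs send tank/data@2024-05-01 | zfs receive backup/data
# After that, each backup run is snapshot + incremental send:
zfs snapshot tank/data@2024-06-01
zfs send -I tank/data@2024-05-01 tank/data@2024-06-01 | zfs receive -F backup/data
```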

I tried to set up a similar system off a laptop and failed miserably. Every few days to few weeks it would refuse to import the pool, and there was nothing I could do. I never spent much time troubleshooting and instead started pulling backups over the network, but I had been doing just about everything wrong: the USB drive was possibly SMR, and it was plugged into a docking station.
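If anyone else hits the same wall, these are the first things I’d poke at now; a rough sketch, with a hypothetical pool name:

```
# See what pools the attached devices advertise; scanning by-id avoids
# confusion from USB device renumbering.
zpool import -d /dev/disk/by-id

# Look for USB resets or UAS errors around the time of the failure:
dmesg | grep -iE 'usb|uas|reset'

# If the labels look intact, try a read-only import first:
zpool import -o readonly=on backup
```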

If you don’t mind sharing, what USB-SATA adapter were you using initially?

> what USB-SATA adapter were you using initially?

One that I tried was described on Amazon as

SATA to USB 3.0 Adapter Cable for 2.5 3.5 inch HDD/SSD, Hard Drive Adapter Converter Support UASP Includes 12V/2A Power Adapter, Black

And sold by EYOOLD.

The UGREEN worked well with the Pi 4B, and I have completed the resilver/scrub required to expand the pool to 2x 8TB HDDs. The setup uses a Wavlink dock with 2x drive bays, 2x charging ports (one of which powers the Pi 4B), 2x USB3 ports, and 2x SD card slots. During the process, while I had three drives attached, the first drive in the Wavlink dock was reset 17 times; that does not happen in normal two-drive operation.
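For the curious, I counted the resets from the kernel log; something like the following, though the exact message text varies by kernel and driver:

```
# Count USB device resets since boot (the matched string is an example and
# differs between kernels/drivers):
dmesg | grep -ci 'reset superspeed'

# Per-device read/write/checksum error counters accumulate here too:
zpool status -v
```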

Does this answer your question?


Yes, thank you! I appreciate the thorough answer. I was curious because of the unusual errors. I suppose it makes sense given the reported TB limit.


One reason one might choose an external USB enclosure for a pile of disks is the following use case:

Let’s say I’m scrounging through my garage and find a bunch of 1TB and 2TB disks that were good when I last used them 10 years ago; let’s say a bunch of WD Green disks from 2011 or something. (Yeah, I didn’t realize I had them, but hey… storage! Yay!) I do realize the risk of keeping anything on them given their age (who knows when they will fail), but hey… let’s make a nice ZFS pool out of them!

Now, I don’t want to spend a boatload of money, but how do I use them? USB enclosures are going to be cheap. What do I do? What would be the best approach to using them without breaking the bank? Is there a way to do this with actual SATA cards that isn’t crazy expensive?
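Just to make the question concrete, I was imagining something like mirror vdevs paired by size; device names below are placeholders:

```
# Pair disks by size so each mirror vdev matches like with like:
# two 2TB disks + two 1TB disks -> ~3TB usable, one-disk redundancy per vdev.
zpool create junkpool \
  mirror /dev/disk/by-id/ata-WD_GREEN_2TB_A /dev/disk/by-id/ata-WD_GREEN_2TB_B \
  mirror /dev/disk/by-id/ata-WD_GREEN_1TB_A /dev/disk/by-id/ata-WD_GREEN_1TB_B
```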

I’d love to get them on a real SATA card (or multiple), but I’d have to stick them somewhere. I suppose I could have them hanging out all janky-like, stacked in some way with some Legos as spacers for heat dissipation between them. (Hmm… a Lego hard drive enclosure for JBOD… I bet somebody has done that!) But you see where I am going with this.

Edit: Well cool… somebody did it with Legos after all: Building a 12 HDD Raid Enclosure with LEGO - TR Forums

“What if I found a pile of already-opened MREs from the Vietnam War in my garage? How can I most safely eat them? Since they’re already unsafe themselves, why not slather 'em in a little dumpster gravy?”

I’m not trying to be mean here, but jfc…!

Sorry Jim! Looks like I may have triggered you :wink: Really though, I’m not planning on using them in some mission-critical way. I could use the extra storage I found for Plex, DVR storage, or movies that I have backed up elsewhere. I did test them; sure, they’re old, but they weren’t throwing any errors and they sound OK (not scientific, I know).
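For a slightly more scientific test than listening to them, this is roughly what I ran; /dev/sdX is a placeholder, and the badblocks pass destroys all data on the drive:

```
# Full SMART surface self-test, then inspect reallocated/pending sector counts:
smartctl -t long /dev/sdX
smartctl -a /dev/sdX

# Destructive write+verify of every sector (wipes the drive):
badblocks -wsv /dev/sdX
```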


I was already jittery at “USB enclosure” and then you just had to go and bring up WD Greens… twitchtwitch

I still really, really don’t advise doing that. The best thing to do with 1-2TB WD Green disks, whether they still work or not, is just to e-cycle them. BRAND NEW USB hard drives are about $60 apiece. Refurbished 8TB non-USB enterprise drives are also around $60 apiece.

This, in turn, means an entire stack of eight 1TB WD Green drives is worth significantly less than $60: it provides less space, worse performance, more points of failure, and FAR worse mean time BETWEEN failures than that single refurbed HGST 8TB, with roughly an order of magnitude higher heat generation and energy consumption.

So I mean, yeah, on the one hand you could say I’m triggered, but on the other hand you could say I just really want people to realize what their choices mean and what other choices are available. Building a stack of 1TB WD Greens into any kind of multi-drive array in 2024 isn’t much different from what building five 1.44MiB floppy drives into a RAID5 array was for me in 2004: a hilarious way to play around with garbage, but not a thing that should be considered “not-garbage” in any way beyond artistic, once complete. :slight_smile:

Ok. You’ve convinced me! I’ll run a 7-pass shred on them and get them recycled.
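Roughly this, with a placeholder device name:

```
# 7 overwrite passes with progress output (device name is a placeholder):
shred -v -n 7 /dev/sdX

# Arguably one random pass plus a final zero pass is plenty for spinning rust:
shred -vz -n 1 /dev/sdX
```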

I just hate seeing working stuff tossed for no good reason, but you’ve convinced me there IS good reason.

Now what to do with that unopened Adaptec SCSI-2 controller from ’05… I’m sure I could use that… :wink:


(pic: an example ZFS-on-USB rig that survived any load I threw at it)

ZFS on USB has worked great for me after I eliminated all apparent issues with the USB path. That’s included testing different enclosures based on different chipsets, hubs, and host controllers. The above is a pic of one of those iterations. Here are some takeaways:

USB host side

  • Intel host controllers are generally problem-free
  • AMD on-CPU host controllers are not usable for this purpose
  • AMD on-chipset host controllers are generally problem-free
  • AMD ATX boards typically expose both, and their manuals state which is which
  • AMD-based mini PCs might expose only on-CPU ports, and they won’t tell you (see the sketch after this list for checking from software)
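A quick way to check what you actually have when the board or mini PC doesn’t document it; a sketch that assumes Linux with usbutils/pciutils installed:

```
# Show the USB topology: which bus/controller each device hangs off.
lsusb -t

# List the host controllers themselves to see which are Intel vs AMD:
lspci | grep -i usb
```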

Cables

  • Get decent cables and test them (one crude soak test below)
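The crudest test that has caught bad cables for me is a long sequential read while watching the kernel log; a sketch with a placeholder device name:

```
# Sustained sequential read through the suspect cable/path:
dd if=/dev/sdX of=/dev/null bs=1M status=progress &

# In parallel, watch for resets or UAS errors:
dmesg --follow | grep -iE 'reset|uas|i/o error'
```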

USB device side (Oh boy)

  • Ensure your enclosures are using known good chipsets (a sketch for identifying them from software follows this list)
    • This is generally true for WD external drives
    • The NST-370A31-BK is the same as the S351BU313; both use the ASM235CM and are problem-free under heavy sustained load
    • Firmware can make a chipset behave well in one enclosure while being problematic in another. E.g., JMicron is generally unreliable in no-name enclosures while perfectly stable in a WD MyBook
  • Ensure your enclosures keep them chipsets cool
    • This is NOT true for WD external drives: WD uses both JMicron and ASMedia, and I’ve had both overheat under enough sustained load
    • You can fix existing enclosures by sticking small heatsinks to the chipsets. The ghetto server above has heatsinked USB controllers on all 4 drives, with a hole in the enclosure above each heatsink for air circulation; I’ve done this on both WD MyBook and WD Elements
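As promised above, you can usually identify the bridge chipset without opening the enclosure; a sketch (174c and 152d are the real ASMedia and JMicron USB vendor IDs):

```
# ASMedia devices show vendor ID 174c, JMicron 152d:
lsusb | grep -iE 'asmedia|jmicron|174c|152d'

# Also check whether the bridge is running UAS or fell back to usb-storage:
lsusb -t
```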

Shortcut

You can eliminate all variables but the host side by getting a USB box that someone has verified sane. Here’s my contribution. The OWC Mercury Elite Pro Quad is a pretty sane design from what I can tell. It’s got good, reputable USB-to-SATA chipsets, they’re cooled appropriately, they’re linked to a reputable USB hub chip, and the thing comes with good cables. I have them hooked to 5Gbps USB ports and I’m observing 400-500MB/s from each box, but they should be able to do double that on 10Gbps ports. I now have 4 in operation and not a single one has squeaked so far. You can read about the teardown and testing I’ve done in the thread I linked. They’re not cheap, but they’re not stupidly expensive either: about $55 per bay, which is in the ballpark of the decent single enclosures I mentioned.
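If you want to reproduce the throughput check, something like zpool iostat during a scrub or a big copy does it; the pool name is a placeholder:

```
# Per-vdev throughput, refreshed every 5 seconds:
zpool iostat -v tank 5
```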

A friend of mine has been inside a TERRAMASTER D6-320 too, and I’ve seen the pics. It uses the same chips both for the bridges and the hub, although I think it’s got a second hub because it needs 6 ports. I haven’t tested it myself so I can’t vouch for it, but if it’s the only thing you can get your hands on, it looks sane. It’s made in the PRC while the OWC is Taiwanese, if you care about that.


This is WILD, and I love it. This is making me rethink how I’ve always set up my homelab. I tend to ‘hyper-converge’ to keep costs manageable, but this would allow some interesting separation between the storage and compute layers.

Thanks for the info!
