I’ve advocated for not using USB-connected storage, preferring “proper” storage interfaces such as SATA and NVMe. (I have distant memories of SCSI, MFM, RLL … but those mostly predate ZFS.) Nevertheless, I do have what I consider an experimental file server running on USB-connected HDDs. It’s running Debian (not RpiOS), starting with Bullseye as of 2023-03-20 and upgrading to Bookworm (Testing) about a month later. The hardware is a Raspberry Pi 4B. Initially it was a 4GB RAM model, but I swapped that for another 4GB model just because I had one. The USB adapter is a “WAVLINK USB 3.0 to SATA Dual Bay External Hard Drive Docking Station” with an SD card reader, USB ports, and USB charging ports. I like it because one charging port provides sufficient power for the Pi 4B and the other charging port powers a couple of 50mm fans that cool the drives. This drive bay identifies as
Bus 002 Device 004: ID 152d:0583 JMicron Technology Corp. / JMicron USA Technology Corp. JMS583Gen 2 to PCIe Gen3x2 Bridge
I populated it initially with 2x 6TB 7200rpm HDDs configured as a ZFS mirror. I back up a fair amount of my local data to it, including several other Pis that use ZFS. After a year of operation I migrated to 2x 8TB 7200rpm HDDs to get more space; at present the pool is at 71% of capacity. Overall, it’s been pretty solid except …
One day I attached an SSD to one of the dock’s USB3 ports to transfer some bulk data. Operation became unstable and I needed to stop that Right Now. I do not recall whether I tried using one of the USB3 ports on the Pi 4B itself; I just transferred the files over the network instead. Likewise, when migrating to the larger HDDs I could only do so with two HDDs connected at a time.
While I was away from my home lab for a few weeks, I found the following status:
root@piserver:/home/hbarta# zpool status
  pool: tank
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-9P
  scan: scrub repaired 4M in 17:24:42 with 0 errors on Sun May 11 17:48:44 2025
config:

        NAME                        STATE     READ WRITE CKSUM
        tank                        ONLINE       0     0     0
          mirror-0                  ONLINE       1     0     0
            wwn-0x5000cca0bee6a900  ONLINE      10     0    35
            wwn-0x5000039a78c87b89  ONLINE       4     0    24

errors: No known data errors
root@piserver:/home/hbarta#
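The 'action:' text suggests 'zpool clear' once you’ve decided the drives are healthy. As a quick way to spot which devices are accumulating errors, here is a small sketch (my own addition, not from the original workflow) that filters zpool status output for nonzero READ/WRITE/CKSUM counters. It runs against a sample copied from the status above; on a live system you would pipe 'zpool status tank' into the awk instead.

```shell
# Sketch: list devices whose READ, WRITE, or CKSUM counter is nonzero.
# The sample below is copied from the zpool status output above; on a live
# system, replace the printf with an actual `zpool status tank` invocation.
sample='NAME                       STATE  READ WRITE CKSUM
tank                       ONLINE    0     0     0
mirror-0                   ONLINE    1     0     0
wwn-0x5000cca0bee6a900     ONLINE   10     0    35
wwn-0x5000039a78c87b89     ONLINE    4     0    24'

# awk splits on whitespace, so the column alignment does not matter.
flagged=$(printf '%s\n' "$sample" |
    awk '$2 == "ONLINE" && ($3 > 0 || $4 > 0 || $5 > 0) { print $1 }')
echo "$flagged"
```

Here the mirror vdev and both disks show up, while the pool row (all zeros) does not.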
On my return, I checked the dmesg output and found problems starting with
[921468.327430] sd 1:0:0:1: [sdb] tag#12 uas_eh_abort_handler 0 uas-tag 7 inflight: CMD IN
[921468.335675] sd 1:0:0:1: [sdb] tag#12 CDB: Read(16) 88 00 00 00 00 01 7c 26 f2 28 00 00 04 00 00 00
...
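To get a rough sense of how often the bridge was resetting, lines like these can be counted with a simple filter. A minimal sketch, run here against the excerpt above rather than live kernel messages:

```shell
# Sketch: count UAS error-handler events in kernel messages.
# The sample is the dmesg excerpt above; on a live system use:
#   dmesg | grep -c 'uas_eh'
log='[921468.327430] sd 1:0:0:1: [sdb] tag#12 uas_eh_abort_handler 0 uas-tag 7 inflight: CMD IN
[921468.335675] sd 1:0:0:1: [sdb] tag#12 CDB: Read(16) 88 00 00 00 00 01 7c 26 f2 28 00 00 04 00 00 00'

uas_events=$(printf '%s\n' "$log" | grep -c 'uas_eh')
echo "$uas_events"
```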
All of the errors, including when things settled down, can be viewed at https://pastebin.com/VHQxyNZX. I performed an update/upgrade and rebooted before proceeding. Without further action the status became
hbarta@piserver:~ $ zpool status
  pool: tank
 state: ONLINE
  scan: scrub repaired 4M in 17:24:42 with 0 errors on Sun May 11 17:48:44 2025
config:

        NAME                        STATE     READ WRITE CKSUM
        tank                        ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            wwn-0x5000cca0bee6a900  ONLINE       0     0     0
            wwn-0x5000039a78c87b89  ONLINE       0     0     0

errors: No known data errors
hbarta@piserver:~ $
The time for the scrub seemed a bit long, but I haven’t been tracking it. I’m running a scrub now and it looks like it will take about as long.
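If you want to start tracking scrub times, the duration can be scraped from the 'scan:' line. A minimal sketch (line format taken from the output above); on a live system you would feed it 'zpool status tank' and append the result, with a date stamp, to a log file:

```shell
# Sketch: extract the scrub duration from the 'scan:' line of zpool status.
# The sample line is copied from the output above; on a live system:
#   zpool status tank | sed -n 's/.* in \([0-9:]*\) with.*/\1/p'
scan_line='  scan: scrub repaired 4M in 17:24:42 with 0 errors on Sun May 11 17:48:44 2025'
duration=$(printf '%s\n' "$scan_line" | sed -n 's/.* in \([0-9:]*\) with.*/\1/p')
echo "$duration"
```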
I’ve seen a similar issue once in the past. It’s possible that a drive is failing, though nothing in the SMART stats indicates that. Errors were logged, but I suspect they resulted from power issues or perhaps USB/firmware/driver problems. I’m blaming USB. I don’t recall any issues like this with the SATA-connected drives I’ve used; the occasional errors those logged were related to power issues and did not result in any apparent operational problems (once clean power was provided).
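One mitigation I’ve seen suggested for flaky JMicron USB-SATA bridges — and this is an assumption on my part, not something I have tried on this box — is to disable UAS for the bridge and fall back to the older usb-storage driver via a kernel quirk, trading some throughput for stability. The VID:PID comes from the lsusb output above; on a Pi running Debian the parameter is appended to the single line of the kernel command line file (commonly /boot/firmware/cmdline.txt, though the path varies by release):

```
# Assumption: disabling UAS can stabilize buggy USB-SATA bridges at the cost
# of throughput. 152d:0583 is the VID:PID this dock reports; the trailing "u"
# flag tells the kernel to ignore UAS for that device.
# After rebooting, check which driver bound with: dmesg | grep -i 'uas\|usb-storage'
usb-storage.quirks=152d:0583:u
```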
The upshot is that one can use USB-attached storage, but there are likely to be occasional hiccups that are not otherwise encountered.
My $0.02US