Raid0 adjacent to ZFS pool, any considerations?

I am looking to drop 2x 1TB SAS drives onto a Supermicro 846 backplane and stripe them in RAID0 as basically a scratch disk. I probably would not need more than 1TB of space here, but I know I will never need more than 2TB, the speed boost is fun, and the drives are basically free.

This same SAS2 backplane & HBA houses an 8-disk RAIDZ2 ZFS pool.

I am only a few weeks into ZFS. It's working beautifully so far and I hope to continue that trend. Are there any considerations when using other file systems / volume managers in a ZFS file server, or are they fully independent? I think they are, but I just want to confirm; I would hate for a flaky scratch disk to take down my pool.

Host is Debian 12 (Bookworm). Any preference for RAID0 between mdadm, LVM, and Btrfs?
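For reference, here is roughly what I'd be choosing between (a rough sketch only; /dev/sdx and /dev/sdy are placeholder device names and ext4 is just an example filesystem):

# mdadm: stripe the two disks, then put a filesystem on the md device
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdx /dev/sdy
mkfs.ext4 /dev/md0

# LVM: striped logical volume across both physical volumes
pvcreate /dev/sdx /dev/sdy
vgcreate scratch_vg /dev/sdx /dev/sdy
lvcreate --type striped -i 2 -l 100%FREE -n scratch_lv scratch_vg
mkfs.ext4 /dev/scratch_vg/scratch_lv

# Btrfs: striping is native, no separate RAID layer needed
mkfs.btrfs -d raid0 -m raid0 /dev/sdx /dev/sdy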

Anything else I should consider?

Unless I’m missing something, the straightforward answer to your first question is that faulty disks don’t take down other disks on the same controller (unless, by some extreme fluke, the failure is an electrical one that affects the bus or the data on it).

Re other considerations: bandwidth and CPU usage are the obvious ones that spring to mind. My initial thought was “if speed is so important/useful and data safety isn’t, why not drop in 2x 1TB cheap SATA SSDs in JBOD?” Probably much faster and cheaper than any spinning SAS drives. At a guess (i.e. without checking specs etc.), likely more reliable too.

The Linux and ZFS experts here can probably throw in a few more specific considerations…

Agreed, an SSD would be far faster still, but these drives were free and there will be no concerns about write endurance. The primary speed need here is when I write out to the pool; I'm just trying to get closer to the pool's speed.


The only thing I’ll point out here is that you could do a second pool of single disks, instead of a separate RAID0.

root@box:~# zpool create z2pool raidz2 disk0 disk1 disk2 disk3 ...
root@box:~# zpool create raid0pool disk4 disk5 disk6 ...
root@box:~# zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
z2pool     1.77T  1.18T   599G        -         -    51%    66%  1.00x    ONLINE  -
raid0pool  7.27T  3.01T  4.25T        -         -     0%    41%  1.00x    ONLINE  -

(I know, the actual numbers in that zpool list don’t make sense for newly-created pools. Don’t @ me; I just copied and pasted the output of an actual zpool list on an actual two-pool machine without bothering to munge the numbers!)


Oh, I did not know one could have multiple pools on one machine.

So obviously a RAID0 pool is far less reliable than the Z2 pool, and the complete failure of this second pool is just a question of time; that's fine for this R0 pool.

This R0 pool would fail alone, correct, and have no effect on my primary Z2 pool?

Correct, one pool failing has no bearing on any other pools on the system.
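If the scratch pool ever does go sideways, it's easy to confirm the damage is contained to it (a sketch, reusing the pool names from the example above):

root@box:~# zpool status -x          # lists only pools with problems
root@box:~# zpool status raid0pool   # inspect just the scratch pool
root@box:~# zpool destroy raid0pool  # once it's dead, scrap it and recreate it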


Thanks for the clarification - you didn’t initially say that the drives were free, which changes a lot of parameters! :slight_smile: I’m all for re-using old/surplus/free anything - go for it - just be sure to understand the risks/options…

I still use Windows 7 (and XP for a specific use-case :blush: ), but not for everything and it’s not suitable for everyone :slight_smile:

I did not want to go to Win10 either, so when support for Win7 ended I let go of my dual boot and went straight to Linux. Learning accelerated when I suddenly had to; I should have done it a decade ago.


Not 100% exactly… If you have a USB-based pool and a disk fails, the zfs/zpool commands can “freeze” (yes, I’ve seen it), which will usually require a hard reboot and removal of the bad disk to clear the condition. You may be able to detect what’s going on by going through the syslog looking for disk-failure messages.
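If you suspect that's what's happening, the kernel log is the place to look (a sketch; the exact error strings vary by driver and drive):

root@box:~# journalctl -k -p err --since "1 hour ago"
root@box:~# dmesg | grep -iE 'i/o error|link reset|offline'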

That’s not a pool failure problem, that’s a hardware failure problem, and occurs regardless of filesystem.

Doesn’t HAVE to be a USB drive to cause it, either. A deranged hard drive can lock up the entire SATA bus and render the entire system unusable until it’s pulled; this is FAR more likely to happen with consumer drives (e.g. WD Blue, Seagate Barracuda) than professional drives (e.g. WD Red, Ironwolf), but I’ve seen it happen with both.

:nods: This is why we recommend NAS-rated drives, and SAS cards :wink: