Hello there.
A few months ago I set up ZFS for the first time after listening to Jim and Allen talk about it for a while. So far it is working well, but I would appreciate a sanity check on my setup and, hopefully, answers to a few questions.
First, the server:
Dell PowerEdge R630
Drives: 8x 1 TB Seagate drives
Host OS: Ubuntu 24.04
The Setup:
I use Incus to manage my containers and VMs. Incus supports using an existing ZFS pool for its storage. I have a backup workflow which exports container snapshots and sends them to another machine, so my goal for ZFS was more for the fault tolerance and data integrity features.
Since I’m not ultra concerned about the host operating system, I installed Ubuntu 24.04 on just one of the drives. Even if this drive goes out, I have all the Incus containers and VMs backed up elsewhere. Since the drives are relatively small, this also gives me more disks to work with.
My plan was to use Raidz2 so that I could have the capacity of five drives, but with tolerance for two drive failures.
When I began to create the pool, I realized one of the drives was bad. This left me with a Raidz2 pool of 6 drives.
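For reference, the pool was created with roughly the following command (the pool name and device paths here are just placeholders; on the real box I used the /dev/disk/by-id paths):

```
# six-wide Raidz2 pool built from the remaining drives
zpool create tank raidz2 \
  /dev/disk/by-id/ata-disk1 /dev/disk/by-id/ata-disk2 /dev/disk/by-id/ata-disk3 \
  /dev/disk/by-id/ata-disk4 /dev/disk/by-id/ata-disk5 /dev/disk/by-id/ata-disk6
```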
Questions:
Overall, does this setup make sense for the use case I described? Are there any problems I didn’t consider with this setup?
I have a 1 TB drive I can use to replace the faulty drive, but it’s a Dell drive.
- The Dell and Seagate drives appear to have the same RPM; can the Dell drive be used to replace the faulty one?
- Can a replacement drive be used at all to expand the existing Raidz2 pool?
Again, this is my first time using ZFS so please let me know if I’m assuming too much or referring to something incorrectly.
You can mix drive models just fine, as long as they are the same size.
(Heck, you can mix speeds too, if this is a home lab/personal kind of thing, although it’s less than ideal.)
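Assuming your pool is called tank (a placeholder; substitute your own pool and device names), swapping the bad disk for the Dell drive is just a zpool replace, roughly:

```
# find the faulty drive's name (or GUID) in the pool
zpool status tank

# replace it with the Dell drive; both device paths below are placeholders
zpool replace tank /dev/disk/by-id/ata-old-seagate /dev/disk/by-id/ata-new-dell

# watch the resilver until it completes
zpool status -v tank
```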
RaidZ doesn’t expand easily. If expansion or speed matters to you, you’d probably be better off with three mirrored pairs (roughly RAID 10), which can easily be expanded with any new pair of drives later. But you lose capacity that way, and the risk profile changes: the wrong pair of drives failing would blow it all up, while the “right” three drives could fail and leave you okay.
It’s your choice; the value of your data, your budget, and your comfort level with risk and restores are what decide it.
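If you did go the mirrored-pairs route, creation looks something like this (again, pool name and device names are placeholders):

```
# three two-way mirrors striped together, roughly RAID 10
zpool create tank \
  mirror diskA diskB \
  mirror diskC diskD \
  mirror diskE diskF
```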
I would keep in mind it sounds like you’re using previously enjoyed hardware, so you really do want to plan for drive failures.
Six wide is an ideal size for a RAIDz2 vdev.
The major issue is that you should not expect the performance of six drives, or even four. You should expect, with a container and VM workload, roughly the performance of one drive for the most part, with some exceptions (both better performance and worse).
If you need higher performance, you want to look at narrower vdevs. With six drives total, this would mean either two three-wide RAIDz1 vdevs, or three two-wide mirror vdevs. In both cases, you’d be dropping down to single parity/redundancy, but the 2x 3-wide Z1 setup will roughly double your performance and the 3x 2-wide mirrors will better than triple it.
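For illustration, the 2x 3-wide Z1 layout would be created roughly like this (pool and device names are placeholders); the 3x 2-wide mirror layout is the same idea as the mirror example earlier in the thread:

```
# two three-wide RAIDz1 vdevs in one pool
zpool create tank \
  raidz1 disk1 disk2 disk3 \
  raidz1 disk4 disk5 disk6
```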
Thank you for the reply. You are correct, these are used drives, which is why I was trying to choose an option that tolerated at least two drive failures before data loss, especially since this server is at another location.
When you say RaidZ doesn’t expand easily, does that mean the current pool is essentially locked into the current number of drives? What is the normal approach to expanding storage capacity under ZFS?
You expand a pool by adding vdevs, typically in the same configuration as your existing vdevs. If you’ve got a six-wide Z2 vdev, that means to get additional space you either add a second six-wide Z2 vdev, or you replace all six drives in the existing vdev with larger drives, one by one; only when the sixth and last one finishes resilvering do you actually get to use the additional capacity of the larger replacement drives.
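In command terms, the two options look roughly like this (pool and device names are placeholders):

```
# option 1: add a second six-wide Z2 vdev (needs six more drives up front)
zpool add tank raidz2 disk7 disk8 disk9 disk10 disk11 disk12

# option 2: grow in place by replacing every drive with a larger one, one at a time
zpool set autoexpand=on tank
zpool replace tank disk1 bigger-disk1   # wait for the resilver, then repeat for the other five
```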
By contrast, with a pool of, e.g., two-wide mirror vdevs, you can get access to additional capacity by adding a new mirror vdev (requiring only two drives) or by replacing the drives in any ONE of your existing mirror vdevs (again, only two drives necessary before you get access to the additional capacity).
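Same thing in command form, again with placeholder names:

```
# grow a pool of mirrors with just two new drives
zpool add tank mirror new-diskA new-diskB
```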
Your performance is also significantly higher with narrower vdevs, and even more so with mirrors (which get an x2 read IOPS bump in addition to already being the highest-performing vdev type).
But you do need to pay attention to the pool health, and be ready to step in with replacement drives on a timely basis. The catch is, you already really need those things, even with Z2, so…
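A minimal monitoring habit, with a placeholder pool name, is something like:

```
# quick health check; prints "all pools are healthy" when nothing is wrong
zpool status -x

# regular scrubs catch latent errors before a resilver has to
zpool scrub tank
```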