Hello,
After a discussion over on the TrueNAS Community Forum, I started thinking about this. As a home/SOHO user, trying to teach yourself what components to use to build your first ZFS-based server is extremely difficult, because … to put it mildly … before you can even start asking questions you have to learn how to ask the right questions. ZFS’s learning curve is a fairly sheer cliff, even discounting all the years of outdated information that looks like current gospel if you don’t know any better yet.
I think this would be more of a general ZFS topic; ZFS tends to drive hardware requirements more than the OS that's using it.
Paraphrased a bit from the discussion over there…
I always encourage people to ask for help. I almost always learn something when I try to help them–particularly when I struggle to explain something and realize that I don’t know something as well as I thought I knew it–and we’re all here to learn.
Having a searchable collection of known working systems–particularly those put together by people in home/small office environments without huge budgets–would be an amazing, constantly updating teaching tool.
A template for how to describe your build (what hardware/software, optionally: intended performance, actual performance, warnings, etc.) could be useful as well. Again, what could we steal from PC Part Picker’s templates? 
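Something along these lines might be a starting point (field names are only a suggestion, pulled together from what people in this thread already describe about their builds):

- Intended use (NAS, backup target, VM storage, ...)
- CPU, motherboard, RAM (ECC or not)
- HBA/controllers, drives (model, capacity, CMR/SMR), pool layout
- PSU and case
- OS/hypervisor and ZFS version
- Intended vs. actual performance
- Warnings and lessons learned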
I agree, we should immediately do this, right after we’ve finished the task “let’s make a list of all the streets that work with an Audi A4”.
If the underlying hardware works fine, ZFS should probably work on anything. Apart from “have a little bit more RAM in the system for the ZFS internal caches”, maybe a list of stuff that needs to be avoided makes more sense?
- SMR hard drives (a rough detection sketch follows after this list)
- hard drives connected through USB adapters
- maybe also a list of NVMe flash drives that are known to misbehave
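For the SMR point, a rough first-pass check is possible in software, though it's not reliable on its own: many drive-managed SMR disks report nothing, so a clean result still needs to be cross-checked against the manufacturer's CMR/SMR lists. A minimal sketch, assuming a Linux host with smartmontools installed and run as root:

```python
# Rough first-pass SMR check: run "smartctl -i" on each whole disk and
# look for lines mentioning zoned behaviour. Absence of a match is NOT
# proof of CMR - many drive-managed SMR disks report nothing here.
# Assumes Linux with smartmontools installed; run as root.
import subprocess
from pathlib import Path

for dev in sorted(Path("/dev").glob("sd?")):            # whole disks only
    info = subprocess.run(
        ["smartctl", "-i", str(dev)],
        capture_output=True, text=True,
    ).stdout
    lines = info.splitlines()
    model = next((l.split(":", 1)[1].strip() for l in lines
                  if l.startswith(("Device Model", "Model Number"))), "unknown")
    zoned = any("zoned" in l.lower() or "SMR" in l for l in lines)
    print(f"{dev}: {model} -> {'possible SMR' if zoned else 'no SMR markers reported'}")
```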
I like the idea of including a “common mistakes to avoid” preface.
The audience I’m thinking of doesn’t have the background to know that “the underlying hardware works fine.”
There are a great many people who want to build a storage server instead of buying an overpriced prebuilt storage appliance, because building saves money and gets them exactly the features and form factor they want.
Having the desire to build a server is not the same thing as having the knowledge. And while it’s possible to teach yourself without help, the mistakes you make as you learn will be expensive. I did it that way, and would prefer to help others avoid making my mistakes if possible.
I have a computer science degree, work with AI and ML, and already had a functioning Proxmox cluster running when I started trying to teach myself the basics of RAID and ZFS, and the curve was still steep, especially on the hardware side. In terms of background, I was better prepared than most, and still struggled trying to reverse engineer what I actually needed to know from decades-old documents that Google gave preference to and newer guides that were written for large commercial server farms.
As a bonus, people just getting started who are working from up-to-date guidance will be more confident, as well. Spending a ton of money without being fully confident in what you're doing is not a fun time.
I’ve built a couple of systems with consumer hardware: Intel Core and AMD Ryzen CPUs, cheap “gaming” motherboards, no ECC because that stuff is hard, no backplanes, small NAS cases or occasionally a Fractal R5 or Define 7XL if I’m feeling fancy.
Just any PSU with enough SATA power plugs (or SATA-Molex adapters, though not too many, so as not to fry anything) and some basic HBAs from eBay or wherever that claim to be TrueNAS compatible (I don’t run TrueNAS bare metal anywhere, but that’s good enough for me).
So far, ZFS has not let me down. It’s definitely worth looking into proper and consistent device references (/dev/disk/by-id), even for an amateur like me. If you put a hypervisor like Proxmox between your hardware and your application server OS (and you want to manage your HDDs inside the guest OS for some reason, like I do), then you’ll probably want to pass the HBA through to the VM to get those references working properly.
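If anyone wants to see what those stable names actually point at, here’s a minimal sketch (plain Python, assuming a Linux host) that maps each /dev/disk/by-id symlink to the kernel device node it currently resolves to:

```python
# Show which stable /dev/disk/by-id name points at which kernel device
# node right now - handy before "zpool create" or "zpool replace" so the
# pool references names that survive reboots and controller reshuffles.
# Assumes a Linux host.
from pathlib import Path

by_id = Path("/dev/disk/by-id")

for link in sorted(by_id.iterdir()):
    if "-part" in link.name:        # skip partition links, keep whole disks
        continue
    print(f"{link.name}  ->  {link.resolve()}")
```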
The only issue I have with ZFS is poor performance (and yes, atime is off!). That’s likely because I don’t have the resources to build a second “production” homelab server to test out tuning parameters. I can’t use my main server or my backup server for testing, so any configuration improvements will have to wait until new hardware is due.
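In case it helps anyone poking at the same thing, here’s a quick sketch that dumps the handful of dataset properties that most often affect perceived performance (the dataset name is a placeholder; assumes the OpenZFS userland tools are installed):

```python
# Print a few dataset properties that commonly affect perceived
# performance. "tank/data" is a placeholder - substitute your own dataset.
# Assumes the OpenZFS userland tools ("zfs") are installed.
import subprocess

DATASET = "tank/data"   # placeholder dataset name
PROPS = "atime,relatime,recordsize,compression,sync"

out = subprocess.run(
    ["zfs", "get", "-H", "-o", "property,value,source", PROPS, DATASET],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    prop, value, source = line.split("\t")
    print(f"{prop:12} {value:12} (set from: {source})")
```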
At least I’m pretty happy with how everything has worked so far, and with zfs send/recv snapshot replication as my backup, I’m also confident that my data will survive.
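For what it’s worth, the replication itself doesn’t need much more than a pipe. A bare-bones sketch (snapshot, host, and target dataset names are placeholders; incremental sends with -i/-I and resumable receives are left out for brevity):

```python
# Bare-bones push replication: stream one snapshot to a backup host via
# "zfs send | ssh ... zfs receive". All names below are placeholders;
# a real setup would switch to incremental sends after the first full one.
import subprocess

SNAPSHOT = "tank/data@nightly"                                          # placeholder
RECEIVE = ["ssh", "backuphost", "zfs", "receive", "-F", "backup/data"]  # placeholders

send = subprocess.Popen(["zfs", "send", SNAPSHOT], stdout=subprocess.PIPE)
recv = subprocess.run(RECEIVE, stdin=send.stdout)
send.stdout.close()

if send.wait() != 0 or recv.returncode != 0:
    raise SystemExit("replication failed")
```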