Homelab NAS build and architecture questions

Hello!

Big fan of the forum, the LNL family podcasts, ZFS, and Linux in general. Even after four years of homelabbing and learning Linux, I still feel like such a noob, and I'm terribly indecisive and insecure about making long-term architecture decisions for myself, so I wanted to run my plan by the brilliant minds here on the forum, if that's okay.

My background:

I have been slowly leveling up my Linux/ZFS knowledge and hardware over the past four years. I started with an old Mac mini running Proxmox with OWC drive cages attached via Thunderbolt, then moved to an Intel NUC with an OWC cage attached via USB (not ideal, I know). I currently use sanoid/syncoid, and my datasets are replicated to backup drives both locally and offsite. I've run into very few issues, so I must be at least somewhat competent even though my hardware isn't ideal.

New build

I have finally invested in a proper NAS build so I can both expand as I reach 80% capacity on my current setup and properly attach drives directly via SATA. I have six 12 TB drives on the way. I'm leaning towards putting them all into a raidz2 pool and using Proxmox to manage ZFS and my LXC containers (mostly local/offsite media streaming, plus a few websites, Pi-hole, &c.: standard homelab stuff), since I'm rather comfortable and experienced with it. But my shiny object syndrome is tempting me to use this migration as an opportunity to learn something new. I know I could go Ubuntu/Debian with QEMU and virt-manager or the like, but I'm hesitant to make such a big switch for a 'production' server.

Questions

How would you architect/lay out six 12 TB drives for a new homelab NAS? Are L2ARC and SLOG even necessary for my use case? I'm not as worried about performance as I am about finding a middle ground between redundancy and capacity.

If I decide to go this route, can I transfer Proxmox LXC backups to virt-manager painlessly? The containers are all backed up as ZFS datasets, I believe; Proxmox does so much I don't even realize.

What else should I be considering for this sort of upgrade?

Sorry for being such a noob, I think I’m just looking for some reassurance in sticking with what I am familiar with and guidance with my intended pool layout. I also wanted to finally participate in the forum rather than talk to an inept chatbot.

Huge TIA to the community, I wouldn’t know anything at all without everyone here being so generous in sharing their valuable knowledge.

For six drives, either one six-wide Z2 or three mirrors. You'll get significantly better performance out of three mirrors than a single Z2, especially if you're doing much in the way of VMs.
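In zpool terms, the two layouts look like this (a sketch with placeholder device names; in practice you'd want the stable /dev/disk/by-id paths):

```shell
# Option 1: one six-wide raidz2 vdev -- survives any two drive failures
zpool create tank raidz2 sda sdb sdc sdd sde sdf

# Option 2: three two-way mirror vdevs -- roughly triple the small-block
# IOPS, but only one drive may safely fail per mirror
zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf
```

`tank` is just a placeholder pool name, and you'd likely also set properties like `-o ashift=12` at creation time.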

SLOG isn’t necessary and is unlikely to help you out much. I wouldn’t even consider it unless you’re planning on a significant database workload (which really shouldn’t go on rust at all) or heavy sync NFS usage.

I wouldn't really recommend a cache vdev either. They can help a bit, especially now that L2ARC is persistent, but in my experience it's never the kind of night-and-day (or, if you prefer, solid-state-and-rust) difference that you're hoping for.

RAM, really. By default, ZFS wants to use half your RAM for filesystem cache. That can be overridden… But I don’t generally recommend doing so. Filesystem cache is incredibly important even on a traditional filesystem with a simple LRU cache, and it’s extremely valuable when you’re talking ZFS and ARC.
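To put a number on that "half your RAM" default (a back-of-envelope sketch; the runtime knob shown requires the ZFS module loaded and root privileges):

```shell
# Default ARC ceiling on a 32 GiB box: roughly half of physical RAM
echo $(( 32 * 1024 * 1024 * 1024 / 2 ))   # 17179869184 bytes (16 GiB)

# Inspect or override the cap at runtime (0 means "use the default"):
# cat /sys/module/zfs/parameters/zfs_arc_max
# echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max
```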

Thank you! I budgeted 32GB of RAM for this build which feels adequate for humble homelab use. Though I could swap it for the 64GB I have in my desktop if you’d suggest doing so, but would rather not if you don’t think it’s vital at these numbers. I’m unsure where I need the RAM more at this point, but that might be more of a personal problem I need to reconcile.

How much performance improvement are we talking about with mirrors vs. Z2? I hate to lose ~12 TB of usable capacity from my original plan of the six-wide Z2, but three mirrors sounds like the 'enterprise' way of doing things, and I could be persuaded to do things the 'proper' way; I have room in the case for another six-drive expansion (much) further down the road.
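To spell out the capacity trade-off I'm weighing (raw numbers, before ZFS overhead):

```shell
# Six 12 TB drives, two candidate layouts
raidz2=$(( (6 - 2) * 12 ))    # raidz2 spends two drives' worth on parity
mirrors=$(( (6 / 2) * 12 ))   # each two-way mirror stores one drive's worth
echo "six-wide raidz2: ${raidz2} TB usable"
echo "three mirrors:   ${mirrors} TB usable"
```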

For some reason, though, more than one vdev scares me: if I lose any vdev, I lose the pool, right? So if, by some slim chance, two drives in the same mirror vdev fail, the pool is toast. Can you reality-check this irrational fear for me? Is one layout truly 'safer' than the other?

One more dumb question: I got these drives from serverpartdeals, with ~32,000 power-on hours and no reallocated sectors. I'm running a long SMART test as we speak, but I'm wondering if I should run badblocks as well? I know it'll take a while, though, and I'm getting antsy.
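For reference, the burn-in I have in mind looks roughly like this (the device name is a placeholder, and badblocks in write mode is destructive, so this is strictly for drives with nothing on them yet):

```shell
# Kick off the long SMART self-test (takes hours on a 12 TB drive)
smartctl -t long /dev/sdX

# Afterwards, review the results -- reallocated/pending sector counts especially
smartctl -a /dev/sdX

# Optional full write/read surface test (DESTROYS all data; takes days)
badblocks -wsv -b 4096 /dev/sdX
```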

I appreciate the time you took to respond, it’s really cool to have insight from one of the literal experts on ZFS in here.

If all it’s doing is NAS stuff, you probably don’t need the performance. If you’re running VMs on it… Well, presumably you’re old enough to remember computers without solid state drives. They frankly sucked, a lot.

I bring this up because, for the most part, a six-wide Z2 in a desktop-like role is going to perform at about the level a single hard drive would. (Conventional RAID6 would be even worse.)

If that’s an acceptable level of performance, cool. But if you want to do much in the way of small block operations, mirrors will give you roughly triple the performance.

As far as safety goes, yes, if you lose both drives in any one two-wide mirror vdev, you lose the pool. With that said, the odds of simultaneously losing both drives in a mirror are quite low in my experience… As in, I haven’t yet personally seen it happen.

That’s very likely because I actually monitor my pools, mind you. I’ve certainly seen plenty of RAID6 and Z2 arrays die because their admin thought dual parity was enough safety they didn’t need to bother with monitoring… :cowboy_hat_face:
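Monitoring doesn't have to be fancy, either. Even something as simple as this from a daily cron job beats nothing (a sketch; the email address is a placeholder, and zed, the ZFS Event Daemon, with ZED_EMAIL_ADDR configured is the more robust route):

```shell
#!/bin/sh
# Alert if any pool is degraded, faulted, or accumulating errors.
# `zpool status -x` prints "all pools are healthy" when everything is fine.
if ! zpool status -x | grep -q "all pools are healthy"; then
    zpool status | mail -s "ZFS pool problem on $(hostname)" you@example.com
fi
```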

In all seriousness, losing both sides of a mirror before it can be repaired is not a very likely failure, and it's best addressed not by adding extra parity but by having proper backups.

Do you have backup?

I appreciate the rationale, definitely something to consider then. My setup isn’t huge and I monitor my pools daily.

I'm mostly using LXC containers in Proxmox, but I typically put their root filesystems on the NVMe drive Proxmox is installed on; I should have mentioned that earlier. The ZFS pool is mainly media storage, bind mounted into the containers as needed. The LXC containers themselves are backed up to a different zpool.
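For example, a typical bind mount on my setup looks something like this (container ID, pool, and paths are all hypothetical):

```shell
# Expose the host dataset /tank/media inside container 101 at /mnt/media
pct set 101 -mp0 /tank/media,mp=/mnt/media
```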

I am so proud to tell you that I have so many backups. Losing my senior project back when I was an art student in college proved their importance, and 288 episodes of 2.5 Admins definitely reinforced it. The tech debt around my backups is mounting, though, hence this upgrade. It's for sure my biggest yet, and I'm unsure when I crossed the line from just playing with Linux and ZFS to it being pretty much everything I run, so I want to get it right this time instead of just doing what I'm used to. I (try to) practice what you preach, so I should rest a little easier knowing I have those backups. Thank you!