I’m considering a redesign of my TrueNAS homelab NAS and would like input on the theoretically best approach. I understand that the only way to really know the best configuration is to test with my workload, but that is quite hard to do as I do not have a spare test NAS. I am primarily looking to improve performance for VM storage and self-hosted homelab services. (More IOPS!)
Current HomeLab storage setup:
- TrueNAS primary NAS (approx 10TB of data, mirrored pairs, detailed specs below) for personal docs, family photos, VM disks, back end storage for docker services, backups from desktop PCs.
- Synology (approx 20TB of data, SHR-1) for media server files, as I felt it was wasteful to store this on TrueNAS with 50% storage efficiency
- Primary Proxmox server with 10G link to TrueNAS
- Secondary Proxmox server with 1G link to TrueNAS
TrueNAS Specs:
- ZFS pool with 2x mirrored vdevs (no SLOG/ZIL, special metadata vdev, or L2ARC)
- 48GB RAM (planning to upgrade to 64GB soon)
What I’m Debating:
Option 1: Two Separate Pools
- HDD Pool: 6x HDDs in a RAIDZ2 vdev (for documents, media, photos, etc.)
- SSD Pool: 3x SATA SSDs in RAIDZ1 (dedicated to VM disks and homelab services)
Option 2: Single Pool with Performance Optimizations
- One 6x HDD RAIDZ2 vdev plus support vdevs such as a special metadata device, SLOG, and/or L2ARC (see the TLDR at the end; a rough zpool sketch of both options follows this list)
Option 3: Something Else
- Open to other ideas.
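For concreteness, here's roughly what I mean at the zpool level (pool and device names are placeholders; I'd actually build this through the TrueNAS UI, but this shows the structure):

```
# Option 1: two separate pools (pool/device names are illustrative)
zpool create tank raidz2 sda sdb sdc sdd sde sdf   # 6x HDD: docs, photos, media
zpool create fast raidz1 sdg sdh sdi               # 3x SATA SSD: VMs, Docker

# Option 2: one RAIDZ2 HDD pool plus support vdevs
zpool create tank raidz2 sda sdb sdc sdd sde sdf
zpool add tank special mirror nvme0n1 nvme1n1   # metadata (and optionally small blocks)
zpool add tank log sdg                          # SLOG - only helps synchronous writes
zpool add tank cache sdh                        # L2ARC read cache
```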
Why RAIDZ instead of mirrors?
- I’ve long thought that mirrors are the superior option for homelabbers because of the flexibility when expanding your NAS. However, my understanding is that the recent RAIDZ expansion feature and the upcoming ZFS rewrite feature are game changers here, allowing much better storage efficiency while still letting me expand 1 or 2 drives at a time (a sketch of the expansion command is below).
- Let me know if I should bite the bullet cost-wise and continue to use mirrors. I do not have a ton of data here, so I could in theory.
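For reference, my understanding is that on a TrueNAS release shipping OpenZFS 2.3 or later, expanding a RAIDZ vdev is a single attach per new disk - roughly like this (pool/vdev/device names are placeholders):

```
# attach one new disk to an existing raidz vdev; expansion runs in the background
zpool attach tank raidz2-0 /dev/sdg
# check expansion progress
zpool status tank
```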
My Goal:
Both options would handle docs, photos, and media-server data just fine. I’m trying to figure out which option will give me better performance for homelab services - essentially VM disks and Docker database storage. I’m also open to hearing whether other setups might make more sense based on your experience.
Looking for feedback on:
- Real-world performance differences between the two
- Whether L2ARC/SLOG/special vdevs on an HDD pool are worth it in practice compared to an SSD zpool, given that my performance-sensitive capacity need is only about 2TB
Storage size context:
- docs/pics/media server/backups - approx 30TB
- VMs/HomeLab stuff that needs performance - approx 2TB
3-2-1 Backup:
As a note, I have an offsite TrueNAS that ALL data (except media server) is backed up to, regardless of the storage layout I am debating above. A subset of my most critical data is also backed up to a cloud provider.
TLDR - Can 1 vdev of 6x HDDs in RAIDZ2, with enhancements such as a special metadata device, L2ARC, and ZIL/SLOG, beat the performance of a basic SATA SSD zpool with 3x drives in RAIDZ1?
Thanks in advance for this community’s input.
An array of mirrors is probably better - quicker to rebuild a failed disk & easier to upgrade the size of the storage. If you need to save money on disks just search Reddit for “shucking disks” - taping over the first 3 power pins works, rather than going cross-eyed trying to cover only the 3rd pin (the 3.3V power-disable issue on shucked drives). WD Elements external USB drives are good to “shuck” (usually helium drives) - mine have been running for 5 years now. Used Toshiba enterprise HDDs are also a good choice - never had a problem in 7 years with these (& I use them for /var & my Steam library to save NVMe wear & tear).
I recently bought some used enterprise SSDs (another good way to save $) - & created a separate SSD stripe for my Steam library, plus partitions for a mirrored special device used with another HDD stripe to speed up file listing / search times.
Finally, don’t use VMs (unless you really, really need the extra separation for security purposes) - “rootless containers” give you almost the same amount of security with much better performance. On Proxmox, Alpine Linux LXC containers can probably do almost everything a VM can do - and updating an Alpine LXC from version to version is as simple as changing /etc/apk/repositories (since Alpine is basically just busybox with OpenRC).
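For example, the release upgrade inside an Alpine LXC is roughly this (the version numbers are just placeholders):

```
# point the repositories at the new release...
sed -i 's/v3.19/v3.20/g' /etc/apk/repositories
# ...then refresh the package index and upgrade everything
apk update && apk upgrade --available
# restart the container (or reboot) to finish
```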
Switching from Docker to Podman makes “rootless” very easy, & the containers can auto-update themselves (you just add an auto-update label to the container) - unlike Docker, where you need another container(s) to update your containers. Once you have a group of Podman containers running, it’s simple to turn them into a “pod” - & once you have a “pod”, it’s a single command to convert it into Kubernetes YAML to run in a real cluster. I run my Podman containers as services from “quadlet” .container files (see podlet on GitHub) - no need for an always-on container daemon running as root (a security risk).
I may be a bit biased as I’m at home in a terminal after running a Linux desktop for more than 20 years & don’t need a GUI - but it’s also a better way to learn the intricacies of new things, & you get more control. My stuff tends to “just work” & I have the freedom to run everything on my Arch Linux workstation (EndeavourOS - nice installer). I only use Windows for gaming, so that runs in a VM with hardware passthrough of an NVMe / SATA SSD / GPU (with around 97% of bare-metal performance).
Running KVM / LXD / podman gives you all the flexibility you need to run things at home. Some other ideas for experiments - RKE2 & Talos Linux to learn kubernetes.
Another well-kept secret for running containers is openSUSE’s MicroOS. For as long as I can remember my remote servers always ran Debian or Ubuntu (or, for a few years, Alpine Linux) - about a year ago I switched everything to MicroOS & then didn’t touch them for 6 months - nothing was broken or needed any attention, & everything was fully up to date: the bare metal (because it’s a rolling release) & the containers (from the auto-update labels). The only disadvantage I’ve found with MicroOS is that you cannot enforce signed kernel modules with ZFS (it’s not a mainline package). Being able to build a customised MicroOS ISO installer with “kiwi” is very, very, very useful - it was how I switched from Ubuntu so easily (every server identical, with practically zero post-install configuration).
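The kiwi build itself is a one-liner once you have an image description directory (the paths here are placeholders):

```
# build a custom MicroOS installer image from a kiwi image description
kiwi-ng system build --description ./my-microos-description --target-dir ./build
```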
Just my 2-3 cents for low-maintenance setups that “just work” - always prefer the safety of your data over any perceived “lost” space; disks are relatively cheap.
Hope the above gives you some ideas to try something new - I think you’ll find you don’t really need the extra performance you think you need - & you’ll have more time to spend on development instead of administration (which was my major goal)
I currently have an array of mirrors and while I generally agree with you, I have very little data that needs to be performant. Approx 1.5-2TB. I can easily build a small SSD array of mirrors for this.
The remainder of my data falls into two buckets:
- Docs, family pics, backups, etc. - Approx 8-10TB. I definitely want on ZFS, but RAIDZ2 offers totally acceptable performance.
- Media library - approx 15-20TB - This I actively do not want on mirrors. I can’t justify putting plex hoarding on an array with 50% storage efficiency.
Because of point 2 above, I’ve always had a ZFS nas for most data and a separate NAS for media. The media NAS was originally Unraid, then Synology. Now I’m thinking of bringing it onto TrueNAS so that I don’t need the complexity of a separate host.
What enterprise SSDs do you like? I’ve been looking for some myself and am less experienced here. I’m generally looking for drives that 1) have quite high write endurance and 2) are pretty cheap per TB on eBay. I don’t need the latest and greatest performance; any SSD is dramatically faster than spinning rust.
I bought 2nd hand Enterprise SSD:
- Intel D3 S4610 1.92 TB
- Samsung SM883 1.92TB
They have similar endurance so mixing them in a stripe or mirror is fine.
In the past, for a rock-solid ZFS experience with large amounts of data, I would have gone with FreeBSD (I managed 100TB on BSD for years with zero problems) - but the Linux ZFS modules are good enough nowadays, & from a quick read of the latest TrueNAS release notes, they think the same.
Also very interesting: TrueNAS now includes incus for running LXD containers & VMs. I just love Arch Linux & bought a massive case for my workstation that can accommodate 12 disks (so I don’t need a separate NAS).
I wrote a Python module for creating custom LXD / LXC containers for incus, which you may find useful.
TrueNAS, I think, is the way to go. A colleague runs it & does VFIO (hardware passthrough) with it.
For RAID-Z(2|3) I’ve no direct experience as such - I’ve just seen a lot of people on various forums having problems with it. I would probably keep a hot spare in the array - or, as I do now, keep an external WD Elements USB drive I can shuck in an emergency to provide a disk - though with SMART monitoring I should get plenty of warning of problems.
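Adding a hot spare is a one-liner if you go that route (pool and device names are placeholders):

```
# add a hot spare that ZFS can resilver onto if a disk faults
zpool add tank spare /dev/sdh
```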
I think the answer comes down to:
- do you want the extra usable space of RAID-Z2 (with 6 disks that's 4 of 6 usable versus 3 of 6 with mirrors, i.e. roughly a third more)
- or fewer problems with an array of mirrors (which will also be easier to upgrade)
I believe Jim (the owner of this forum) recommends mirrors for these very reasons.