I want to set up ZFS. Looking for the best write performance I can get.
I have 3x 200GB SSDs, 6x 400GB SSDs, 5x 4TB hard drives, and 2x SAS HBAs supporting 8 drives each. I’ve only got one PCIe x16 slot. I could use a riser and put the 2nd HBA in an x1 slot, but I think the performance penalty would be too high. The SSDs are all datacentre drives, with plenty of health left.
The available host is a Ryzen 3200G with 16 GB of RAM. Looking to run Proxmox with FreeNAS on top. Nothing crazy heavy.
I was considering using some SATA drives in order to be able to use more of the SAS ports, or perhaps just putting 2 of the 4TB drives on the 2nd controller.
I’m new to ZFS and wondering what the best cache setup would be. Most of the workload would be bulk transfers, video editing and torrenting.
Googling around seems to be causing more confusion than giving useful answers. Some are saying that I won’t have enough RAM. I also wonder, since the PCIe slot hangs off the south bridge, whether it would be even more bottlenecked.
I will be trying my luck with an M.2 to PCIe adaptor. There is also a 2nd M.2 slot, but I believe this is connected to the chipset too.
I will need one of the 3 available PCIe slots for a 10GbE network card.
Looking to go RAID Z1 for maximum available space.
This is quite an old question, but I figured I’d reply in case it helps you or others in the future.
ZFS is pretty memory hungry as a rule - more RAM will be a bigger factor than worrying about PCIe lanes. That said, I also have ZFS deployed on a low throughput system that has only 2GB of RAM and ZFS ticks along just fine. Network bottlenecks are huge relative to storage speed - but I’m on 1Gbps local, and my internet is slower still.
Now, PCIe bandwidth will also vary a lot based on the motherboard / chipset. I think this is one of the details that resulted in your detailed question not getting a quick answer. It’s hard to give any advice here as there are a lot of variables, and you’ll need to benchmark and try things yourself.
It is unclear whether you plan to throw all of those disks into a single pool or run multiple pools. Resiliency plays a role here: you group drives into vdevs, and if any single vdev in a pool fails, the whole pool is lost. Some people run lots of mirrored pairs, others pick RAIDZ1 - but drive sizes matter for matching things up, since a vdev’s capacity is limited by its smallest member.
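As a sketch of what that grouping could look like - assuming the 6x 400GB SSDs and the 5x 4TB HDDs each become their own RAIDZ1 pool (pool and device names here are placeholders, not a recommendation):

```shell
# Hypothetical layout - one RAIDZ1 vdev per drive group, two pools.
# Device names (sdb..sdl) are placeholders.
#
#   zpool create fast raidz1 sdb sdc sdd sde sdf sdg   # 6x 400GB SSD
#   zpool create bulk raidz1 sdh sdi sdj sdk sdl       # 5x 4TB HDD
#
# RAIDZ1 usable capacity is roughly (N - 1) x drive size, per vdev:
echo "fast: $(( (6 - 1) * 400 )) GB"   # 6-wide RAIDZ1 of 400GB drives
echo "bulk: $(( (5 - 1) * 4 )) TB"     # 5-wide RAIDZ1 of 4TB drives
```

In practice you’d run zpool create against /dev/disk/by-id paths rather than sdX names, so the pool survives devices getting renumbered between boots.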
If it were me - I’d prioritize the network card, ensuring it got the best connectivity. Then I’d think about the video editing needs - because that’ll be the high bandwidth use case. The rest - I would not get too worried about and just see how it works out.
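On the cache question from your original post: the 3x 200GB SSDs are the obvious candidates for cache/log duty, though for bulk transfers and torrenting they often add less than you’d hope. A hedged sketch, assuming a pool named bulk and placeholder device names:

```shell
# Illustrative only - "bulk" and the device names are placeholders.
#
#   zpool add bulk cache sdm            # L2ARC: spills the read cache onto SSD
#   zpool add bulk log mirror sdn sdo   # SLOG: only accelerates sync writes
#
# Note: an SLOG does nothing for async bulk copies or torrent traffic;
# it matters for sync-heavy workloads like NFS exports or VM disks.
```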
32 GB of RAM might be a good investment to help performance, but the 16 GB you have will work.
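If RAM pressure from the VMs does become an issue, the ARC can be capped rather than letting ZFS take its default share. A minimal config fragment for Linux/Proxmox - the 4 GiB value is just an example to tune:

```shell
# Cap the ZFS ARC at 4 GiB so guests keep headroom (example value).
echo "options zfs zfs_arc_max=$(( 4 * 1024 * 1024 * 1024 ))" | \
    sudo tee /etc/modprobe.d/zfs.conf
# Takes effect after a reboot (run update-initramfs -u first on Proxmox).
```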
Once you get set up, and have a benchmark / performance configuration to try things with - I think it would be interesting to hear what you’ve done and the results you get.
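For a first pass, even dd gives a rough sequential-write number before reaching for a proper tool like fio. The temp dir below is just to show the command shape - point it at a dataset on the pool for a meaningful result:

```shell
# Crude sequential-write smoke test; conv=fdatasync makes dd flush
# before reporting, so the rate includes writeback to disk.
TESTDIR=$(mktemp -d)   # substitute a dataset mountpoint, e.g. /bulk/bench
dd if=/dev/zero of="$TESTDIR/bench.bin" bs=1M count=256 conv=fdatasync
rm -r "$TESTDIR"
```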