Somewhat related to Model recommendation for nvme mirror disks …
I have a very similar use case to @SinisterPisces (I want a mirror configuration for VMs on FreeBSD 14.x), but I’m trying to select a server motherboard that hosts multiple NVMe slots (PCIe 3.0/4.0/5.0). The trouble is that the interfaces on a given board are most often not identical, e.g. one PCIe 4.0 x4 and one 3.0 x2, or one 4.0 x4 and another 5.0 x4, so you might see a factor of 2 or more difference in theoretical speed. My question is: will OpenZFS shrug off such a large disparity between the drives in a mirror and, say, simply deliver the performance of the weaker interface (other things like drive size/model being equal), or will there be some ugly interaction that makes it worse?
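For concreteness, here’s the back-of-the-envelope link arithmetic behind that disparity (theoretical one-direction bandwidth only; real drives and protocol overhead will land lower):

```python
# Rough PCIe link bandwidth per generation, ignoring everything except
# the 128b/130b line encoding used by PCIe 3.0 and later.
GT_PER_S = {"3.0": 8, "4.0": 16, "5.0": 32}  # giga-transfers/s per lane

def link_gb_per_s(gen: str, lanes: int) -> float:
    """Theoretical one-direction bandwidth of a PCIe link in GB/s."""
    return GT_PER_S[gen] * (128 / 130) / 8 * lanes  # bits -> bytes

for g1, l1, g2, l2 in [("4.0", 4, "3.0", 2), ("5.0", 4, "4.0", 4)]:
    fast, slow = link_gb_per_s(g1, l1), link_gb_per_s(g2, l2)
    print(f"PCIe {g1} x{l1} ~{fast:.1f} GB/s vs "
          f"{g2} x{l2} ~{slow:.1f} GB/s -> {fast / slow:.1f}x gap")
```

So the 4.0 x4 vs. 3.0 x2 pairing is actually closer to a 4x gap on paper, not 2x.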
I definitely notice the difference between the Intel Xeon and AMD EPYC chipsets; compare the X11 vs. H11 series SMC (Supermicro) motherboards. I like the ASRock Rack ROMED8-2T, where I can place two similar NVMe drives next to each other.
Older SMC motherboards do not like booting from NVMe drives; H10 and H11 series boards need a firmware upgrade to boot from NVMe instead of SATA. I’m assuming the design philosophy is that your boot media is mostly read-only and only written on upgrades, the fast NVMe slots would be dedicated to L2ARC or SLOG, and the on-board SAS controllers would manage the spinning media.
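Something like this is what I’d expect those fast slots to end up doing; a sketch only, with a hypothetical pool name and device paths:

```python
#!/usr/bin/env python3
"""Sketch: dedicate two NVMe devices to L2ARC and SLOG on an existing pool."""
import subprocess

POOL = "tank"               # hypothetical pool on the spinning SAS disks
CACHE_DEV = "/dev/nvme0n1"  # fast NVMe slot -> L2ARC (read cache)
LOG_DEV = "/dev/nvme1n1"    # fast NVMe slot -> SLOG (sync-write log)

def zpool_add(*args: str) -> None:
    cmd = ["zpool", "add", POOL, *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

zpool_add("cache", CACHE_DEV)  # losing L2ARC is harmless; no redundancy needed
zpool_add("log", LOG_DEV)      # an SLOG is worth mirroring in production
```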
I like the ability to mirror my Linux root (/), and sadly I’m in the group where I rsync my /boot to /boot2 in case I need to rebuild after a failure.
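For what it’s worth, the “mirror” here is just a sync after every kernel or bootloader update, along these lines (a sketch of my habit, not a general tool; /boot2 is a mount point from my own setup):

```python
#!/usr/bin/env python3
import subprocess
import sys

SRC, DST = "/boot/", "/boot2/"  # trailing slashes: sync contents, not the dir

def sync_boot() -> int:
    # -a preserves permissions/times/links; --delete drops files
    # that have been removed from /boot since the last sync.
    return subprocess.run(["rsync", "-a", "--delete", SRC, DST]).returncode

if __name__ == "__main__":
    sys.exit(sync_boot())
```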
Welcome!
Take a look at the ASRock Rack boards. Even the ones that use consumer CPUs (e.g., Ryzen) have options with 2x NVMe slots at matched speeds.
The current trend of multiple NVMe slots with different speeds on more reasonably priced boards is frustrating, but it’s unfortunately understandable as a symptom of Intel and AMD both having abandoned high PCIe lane counts on consumer CPUs.
You can also just put /boot on its own very small mdraid1. Works fine that way.
There were years of GRUB complaining about that not working. It feels like distros forced me to stop doing that over a decade ago. Has GRUB somehow improved?
I think distros must have stopped you from doing that WELL over a decade ago, because I’ve been booting systems from mdraid1 for closer to twenty.
I have multiple ZFSBootMenu systems with mdraid1 /boot right now, as we speak. The only real difficulty is on the ZBM side; it’s a bit finicky to set up. Booting from mdraid1 itself isn’t a problem at all.
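For anyone who wants to try it, the mdadm side is simple; the detail worth knowing is the metadata version. Here’s a minimal sketch (device names are hypothetical placeholders, so check yours before running anything like this):

```python
#!/usr/bin/env python3
import subprocess

MEMBERS = ["/dev/sda2", "/dev/sdb2"]  # hypothetical ~1 GB partitions
ARRAY = "/dev/md/boot"

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# --metadata=1.0 puts the md superblock at the END of each member, so
# firmware and bootloaders that don't speak md still see an ordinary
# filesystem starting at offset 0 of the partition.
run(["mdadm", "--create", ARRAY, "--level=1",
     f"--raid-devices={len(MEMBERS)}", "--metadata=1.0", *MEMBERS])
run(["mkfs.ext4", ARRAY])
```

GRUB can also assemble md arrays on its own (it ships mdraid modules), which is why an ext4 /boot on mdraid1 boots at all.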
Ah, avoiding GRUB… wouldn’t that be nice. *sigh*
The ZBM boxes don’t use GRUB. But I’ve only got a couple of those.
I’ve got easily 80+ systems booting GRUB from mdraid1, with an ext4 root (including /boot) stored on that mdraid1.