My question is about the initial install of Proxmox to mirrored SSDs and perhaps mirrored Optane.
I’ve spent the past year learning about ZFS and buying hardware. My goal is to migrate my world from Windows Servers to Linux/Proxmox/TrueNAS/etc. I have little real-world experience with Linux, but decades of experience with Windows Servers.
I wish to keep Proxmox isolated to a management network and use TrueNAS to serve the LAN. That means the HDDs, network hardware, and other devices will be passed through to TrueNAS running in a VM.
My concern is to provide Proxmox with an efficient install for running VMs. I plan to begin by installing 2 x 960GB SSDs and 2 x 118GB Optanes, the theory being that they’ll be highly over-provisioned for longer life. I have 2 x 960GB Optanes to pass through to TrueNAS when I install it later.
The Question: How would you install Proxmox, given 2 x 960GB SSDs and 2 x 118GB Optanes? Or, should I be doing something different?
I would probably mirror the Optane drives for the boot pool.
Assuming you’re going to be installing spinners for TrueNAS to use, I would mirror the other two drives as well for virtual machine disks.
Just make sure that when you’re setting up the TrueNAS VM that you pass through either the HBA you’re using or directly pass through the disks. (I’ve also had luck in the past with passing through the onboard SATA controller on some motherboards.)
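For reference, both variants are typically a single command from the PVE shell once the VM exists; the VM ID, PCI address, and disk serial below are placeholders, not values from this thread:

```
# Whole-HBA passthrough (needs IOMMU/VT-d enabled; 0000:03:00.0 is a placeholder address)
qm set 101 -hostpci0 0000:03:00.0

# Or per-disk passthrough by stable ID (placeholder serial - list yours with: ls /dev/disk/by-id/)
qm set 101 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL
```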
Thanks. I assume you’re talking about ZFS mirrors, with root on ZFS on the 118GB Optanes. I don’t yet have any experience with Proxmox. I’m guessing I’d begin by installing just the Optane drives, then install Proxmox, then shut down, add the 960GB SSDs, boot back up, and create a new vdev and pool for the VMs.
Why do you prefer Optane for the Proxmox install and the SSDs for VMs? (Thanks again - this is exactly the kind of advice I was looking for.)
It’s been a while since I’ve looked at it, but I believe the Proxmox installer supports ZFS on root out of the box.
You should be able to just select which disks you want for which pool.
The reason is pretty much just the size. Proxmox does not take up much space on its own, but your VM disks probably will. I would much rather have a terabyte to play with for virtual machine disks and images than the tiny Optane drives.
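If the VM mirror is created after the install rather than in the installer, it might look roughly like this from the PVE shell (the pool name and disk IDs below are made up for illustration):

```
# Mirror the two 960GB SSDs into a pool for VM disks (use the real /dev/disk/by-id paths)
zpool create -o ashift=12 vmpool mirror \
  /dev/disk/by-id/ata-SSD_A_SERIAL /dev/disk/by-id/ata-SSD_B_SERIAL

# Register it with Proxmox as storage for VM disks and container root filesystems
pvesm add zfspool vmpool -pool vmpool -content images,rootdir
```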
My little system is similar to what you are aiming at I think, and I second what @bladewdr says, with the addition that you could put your VMs on the Optanes as well, if you think there will be enough space and so passthrough the big SSDs to TrueNAS. As @bladewdr pointed out, Proxmox needs very little disk space - my PVE that is hosting TrueNAS Core is using just under 10GB of the 30GB I gave it.
PVE can definitely be installed on a ZFS mirror, which you create during installation - 2 of my 3 PVE instances are installed like that, with everything (PVE, ISOs, VMs) on the mirror.
The 3rd isn’t, simply because the laptop it is on refuses to boot from its ExpressCard slot, which has a PCIe SATA controller with a pair of SSDs. The onboard iRST SATA controller is passed through to TrueNAS, so I have to boot PVE from a USB SSD instead. Once running though, PVE was able to create a ZFS mirror on the ExpressCard SSDs, which I use for the VMs (TrueNAS + a minimal Windows server).
In total that laptop has 8 SATA SSDs attached: the USB boot drive, the ExpressCard VM mirror, and 5 for TrueNAS (RAID-Z2) on the onboard controller (it would be 6, but the laptop + dock only gives access to 5 SATA ports despite having a 6-port controller!).
Why would you use passthrough? As I understand it, that prevents Proxmox, or anything running on Proxmox (like containers), from accessing the ZFS pool except by going through TrueNAS.
I use (privileged) Proxmox containers with Samba and let Proxmox own the ZFS pool, so I keep full flexibility. You can migrate via a dedicated container (you can run multiple different containers on the same ZFS dataset). Samba then does not see snapshots, but zfs send etc. all work fine on the host (i.e. Proxmox). On Proxmox you can “apt install sanoid” and get a first auto-snapshot system running really quickly, I think.
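As a reference point, after apt install sanoid a minimal policy file might look something like this (the dataset name and retention counts are just an illustration, not a recommendation):

```
# /etc/sanoid/sanoid.conf - minimal example, adjust dataset and retention to taste
[rpool/Homes]
        use_template = production

[template_production]
        frequently = 0
        hourly = 24
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```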
Of course, TrueNAS and other commercial solutions may offer much more than plain Debian+Samba (or whatever I’d run in the containers), like web front ends for configuration and certain tasks.
What would I do?
I have many little PVEs, and I’d install Proxmox on the mirrored SSDs; someone here can surely say how best to include the Optanes. Then, roughly (sketched in shell form below):

- Make a RAID-Z3 or such on the spinning disks using the Proxmox web GUI.
- Use the web GUI to download a predefined “template” image, e.g. Debian 12 or whatever you like (Proxmox by default includes a TurnKey Linux image repository).
- Create a new container, not forgetting to set it privileged (by default it is unprivileged, but then Samba cannot handle the Windows ACL stuff well, or it gets complicated).
- On the PVE SSH shell, create a dataset, e.g. zfs create rpool/Homes && zfs set acltype=posix xattr=sa rpool/Homes.
- Add it to the (stopped) container with id 100, like pct set 100 -mp1 /rpool/Homes,mp=/homes.

All of this goes straight and quick. What actually takes most of the time and testing, I think, is the krb5.conf and smb.conf, but that is out of scope here.
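To make that concrete, a rough shell version of those steps (the container ID, template file name, rootfs storage, and size are placeholders - check pveam available or the web GUI for the real template name):

```
# Dataset for the share, with POSIX ACLs and xattrs for Samba (as in the steps above)
zfs create rpool/Homes
zfs set acltype=posix xattr=sa rpool/Homes

# Create a *privileged* Debian container (template name is a placeholder),
# then bind-mount the dataset into it and start it
pct create 100 local:vztmpl/debian-12-standard_VERSION_amd64.tar.zst \
  --hostname fileserver --unprivileged 0 --rootfs local-zfs:8
pct set 100 -mp1 /rpool/Homes,mp=/homes
pct start 100
```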
I have PVE itself (in 2 out of 3 cases) and the VMs (in all 3 cases) running on a ZFS mirror, and the passthrough is for TrueNAS’s own ZFS array - which as you say makes it only accessible through TrueNAS’s shares.
For me, PVE is simply a means to an end (i.e. running TrueNAS) - previously I was using ESXi - and TrueNAS is also a means to an end (mainly as a NAS). I hate the hassle of having to do CLI stuff (I’m not a hardened sysadmin, my background is in the business logic side of things) and like the OP I’m more familiar with (older) Windows stuff than Unix so I like the convenience of TrueNAS’s GUI for configuring my datasets and ACLs (which are non-trivial for a SOHO).
I was once upon a time an Oracle test analyst writing sed and awk scripts on Tru64 (that dates me!) but I’ve no wish to do that kind of thing any more - Salesforce development is more to my taste.
It’s not possible to build a “little bit” management network. Either it’s a management network or it’s not. The bare-metal OS needs to be a hypervisor on a separate management network. Your router management and the rest of your infrastructure management should also be on the management network. It’s a security thing.
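For what it’s worth, on the Proxmox side that separation usually comes down to which bridge the management IP lives on versus the bridges handed to VMs; a minimal /etc/network/interfaces sketch, with made-up NIC names and addressing:

```
# /etc/network/interfaces (illustrative only - interface names and IPs are placeholders)
auto vmbr0
iface vmbr0 inet static
        address 10.0.99.10/24
        bridge-ports eno1        # management NIC: PVE GUI/SSH only
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno2        # LAN-facing NIC, given to the TrueNAS VM / other guests
        bridge-stp off
        bridge-fd 0
```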
It’s likely we’re about to learn the Baltimore Bridge was hit by a ship in which the control mechanisms were attached to the same network that connects to the internet.
Ah, I see - yes, for some security, and possibly also for safety (making DoS more difficult).
(But you wouldn’t leave the NAS, which contains all the data, off the management network, would you? From another viewpoint, someone could argue that it is actually the data that needs to be protected.)
the Baltimore Bridge was hit by a ship
What a tragedy - my thoughts are with the poor families. (I don’t think a management network for its virtualisation environment could have prevented the accident, and for real security (or here, actually safety), I personally would not consider virtualisation the best option. I think such a system should be kept as simple as possible (and it is still hard enough to get that right).)