On a recent 2.5 Admins episode, @mercenary_sysadmin mentioned using something like Proxmox to host whatever NAS option you like. He also mentioned that nested ZFS is not the best idea.
My question: when going this route, I’m a bit unclear on where ZFS should live. On the Proxmox host, sharing the storage out from there? Or have Proxmox use a standard filesystem like ext4 and then run XigmaNAS / TrueNAS with ZFS on your storage?
In the past I’ve used vanilla Debian with KVM / libvirt and hosted VMs that way, but without integrating ZFS.
I strongly recommend OpenZFS on the host, with simpler traditional filesystems (ext4, UFS2, or NTFS as appropriate) on the guests.
Again, to be clear, there is no reliability issue with nested ZFS, just potential performance issues. Those can in many cases be worked around if you really need ZFS on the inside as well as the outside, but it’s thorny, difficult, and a PITA that’s best avoided if you care about performance and don’t really need that specific scenario.
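For the curious, the workarounds are mostly about not doing the same work twice at both layers. A minimal sketch, assuming the guest's disk is a zvol on a pool named tank (names, size, and block size are just placeholders):

```bash
# Back the NAS guest's disk with a zvol whose volblocksize suits the guest's
# workload (16K here is only an example; match it to what the inner pool does)
zfs create -V 200G -o volblocksize=16k tank/nas-disk0

# Don't cache the guest's data twice: keep only metadata for this zvol in the
# host's ARC and let the guest's own ARC handle file data
zfs set primarycache=metadata tank/nas-disk0
```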
A NAS VM is just fine, but if you’re using Proxmox, my preferred method is to do all of the ZFS stuff on PVE and create an LXC container for Samba. You can bind-mount a dataset for that purpose and use the Cockpit web UI if you’re into that kind of thing. Works great and it’s pretty lightweight.
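For anyone who wants the concrete steps, a rough sketch of the bind-mount part looks like this (pool, dataset, and container ID are placeholders; for an unprivileged container you’ll also need to sort out UID/GID mapping or ownership):

```bash
# On the Proxmox host: create a dataset to hold the shared files
zfs create tank/share

# Bind-mount it into the Samba container (ID 101 here);
# the first path is on the host, mp= is where it appears inside the container
pct set 101 -mp0 /tank/share,mp=/mnt/share

# Restart the container so the new mount point takes effect
pct stop 101 && pct start 101
```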
Is there an advantage to this LXC approach versus just sharing via Samba/NFS from Proxmox directly?
Asking to learn. I currently do my NAS sharing via Samba in a VM with nested ZFS. I recognize this is not ideal and want to change, so I’m trying to learn what to change to.
There’s a school of thought that says you don’t install anything at all on the hypervisor. It’s cleaner, and if you break the hypervisor, everything running on top of it breaks too. Plus you get the benefit of easy snapshots of your guests if each service is its own VM or container.
At its heart, Proxmox is just a gussied-up Debian system, so, especially for a home setup, I think there’s a reasonable argument for just installing Samba on the host.
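If you do go that way, a host-side share is only a few lines. A minimal sketch, assuming a dataset mounted at /tank/share and a user named alice (both placeholders):

```bash
# Proxmox is Debian underneath, so Samba installs the usual way
apt install samba

# Append a simple share definition for the dataset
cat >> /etc/samba/smb.conf <<'EOF'
[share]
   path = /tank/share
   read only = no
   valid users = alice
EOF

# Give the user a Samba password and reload the daemon
smbpasswd -a alice
systemctl reload smbd
```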
There’s no one way; each approach has its trade-offs.
A bit late to the party, apologies. An alternative solution that hasn’t been mentioned: if you can pass a (non-RAID) disk controller through from the host to a VM, you can run TrueNAS in the VM and have TrueNAS use the passed-through controller for its ZFS drives. That’s how mine is set up. TrueNAS’s docs advise against running it in a VM for production use, but for a homelab it works fine.
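The passthrough itself happens on the Proxmox side. A rough sketch, assuming IOMMU is already enabled in the BIOS and kernel, and using an example PCI address and VM ID:

```bash
# Find the PCI address of the HBA / SATA controller you want to hand over
lspci | grep -i -e sata -e sas

# Pass that controller through to the TrueNAS VM (VM 100 and 0000:01:00.0
# are examples; use your own VM ID and address)
qm set 100 -hostpci0 0000:01:00.0
```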
I recently rebuilt my Ubuntu + ZFS + LXC home NAS to run on Proxmox. Having everything in containers was really helpful for the migration. I started by putting Proxmox in a VM under the old Ubuntu host and migrated everything that didn’t require access to ZFS. Then I flipped it around, ran the old server in a VM, and migrated the rest.
This specifically helped with my Samba migration. Unfortunately, there is a critical unresolved bug in Ubuntu’s Samba package that breaks authentication. With Samba in a container, I can easily test upgrades and roll back if it’s not fixed. I could also migrate the existing container without having to fully upgrade it at the same time as all my other services.
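That test-and-rollback loop is just Proxmox container snapshots. A quick sketch with a placeholder container ID:

```bash
# Snapshot the Samba container before attempting the upgrade
pct snapshot 101 pre-samba-upgrade

# ...upgrade Samba inside the container and test authentication...

# If the bug still bites, stop the container and roll it back
pct stop 101
pct rollback 101 pre-samba-upgrade
pct start 101
```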
The one thing you lose is the ability to use ZFS’s built-in share properties (zfs share), but I really don’t miss that. I’m pretty sure file versions and history work too, as long as you bind-mount the root of each ZFS dataset into the container (so its .zfs/snapshot directory is visible to Samba), but I haven’t actually tested that in a while.
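For reference, that “previous versions” behaviour usually comes from Samba’s shadow_copy2 module pointed at the dataset’s .zfs/snapshot directory. A sketch, with the share path and snapshot name format as placeholders that have to match your actual setup:

```bash
# Inside the Samba container: expose ZFS snapshots as "Previous Versions"
cat >> /etc/samba/smb.conf <<'EOF'
[share]
   path = /mnt/share
   read only = no
   vfs objects = shadow_copy2
   shadow:snapdir = .zfs/snapshot
   shadow:sort = desc
   # Placeholder: must match however your snapshots are actually named
   shadow:format = zrepl_%Y%m%d_%H%M%S_000
EOF
systemctl reload smbd
```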
One thing I’ve found painful is trying to run any sort of ZFS tooling within containers. I use zrepl for snapshots, and thought I’d put that in a container. But you have to make sure the container OS is kept in sync with the host, that packages like zfsutils-linux are compatible with the host, and so on. At that point, I figured it was just better to put it on the host.