Hi all,
I’m planning to set up a new Linux server for hosting a lot of e-mail (a large number of accounts) and related applications (Exim, Dovecot, Apache, PHP-FPM, Roundcube…). I’ve used an Adaptec controller + LVM thin pools for years, but this time I’d like to make the server more resilient against outages by using an LSI HBA + ZFS (on RAID1 mirrors) instead. Mainly, I want to rule out lengthy XFS filesystem checks and quota checks on multi-TB arrays after a crash, during which system services can’t function properly. I also want to take advantage of ZFS features: frequent snapshots, quick block-level updates for backups, and quick periodic replication of all data to a replica server that will be ready to take over if needed. Finally, I’d like something (ZFS) that is more admin-friendly than LVM.
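For context, the snapshot + replication workflow I have in mind is roughly the following (pool/dataset names and the snapshot naming scheme are just examples, not a finalized layout):

```shell
# Take a snapshot of the mail dataset on the primary box.
zfs snapshot tank/mail@2024-01-01-0300

# Initial full replication to the standby box.
zfs send tank/mail@2024-01-01-0300 | ssh replica zfs receive -F tank/mail

# Later runs send only the blocks changed since the previous snapshot,
# which is what makes the periodic updates quick.
zfs snapshot tank/mail@2024-01-01-0400
zfs send -i tank/mail@2024-01-01-0300 tank/mail@2024-01-01-0400 \
    | ssh replica zfs receive tank/mail
```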
Disks, ZFS filesystems with user quotas, and the user accounts will be set up on the host, which will function as a storage backend for several applications. But I want those applications to run in VMs hosted on the same box (for performance, and because I don’t want to pay for additional physical boxes), so I need the ZFS files to be equally accessible in two Linux VMs (KVM/qemu) running on the same host: a production VM and a test/dev VM.
(I want to run the applications in VMs, not on the host, so that I can work on new versions of the applications and test them in a dev VM, while the production VM is available to users at the same time. Why? There’s no testing like on a production server, with the real data. After the new set of apps is ready, I can just switch them to production role.)
There are several ways to share the host ZFS filesystem into a VM, but from past experience with KVM+qemu and some rudimentary stuff I remember reading on this, I expect them to differ quite a bit in performance, reliability, and gotchas in nonstandard situations.
I know of these methods of fs sharing to a VM:
- NFS (never deployed it, but I worry about bad performance with sync operations, and problems with unmounting the filesystem in the VM);
- SSHFS (easy to set up, but based on experience, I expect severely degraded performance in the VM);
- 9P folder sharing (don’t know much about it, expecting mediocre/bad performance, and I worry about the maturity and future of this tech - Red Hat/IBM does not seem to have much interest in desktop and KVM these days).
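To illustrate the NFS route, my understanding is that sharing a dataset from the host could be as simple as this (dataset name, network range, and mount point are assumptions on my part):

```shell
# On the host: export the dataset via ZFS's built-in NFS integration.
zfs set sharenfs="rw=@192.168.122.0/24,no_root_squash" tank/mail

# In the VM: mount it. Sync-heavy workloads (mail spools, Dovecot
# index writes) are exactly where NFS performance tends to hurt.
mount -t nfs host:/tank/mail /srv/mail
```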
I’m not sure how these options handle quotas in the VM, though, and quotas are the one thing I really need there.
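For reference, the per-user quotas I rely on look like this when managed on the host (user and dataset names are made up):

```shell
# Set a per-user quota on the mail dataset.
zfs set userquota@alice=2G tank/mail

# Inspect current usage and quotas for all users of the dataset.
zfs userspace tank/mail
```

As far as I can tell, a VM mounting this over NFS, SSHFS, or 9P sees only a generic filesystem, so these quotas would be enforced by the host but not visible or manageable from inside the VM.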
The ideal way (not sure if it exists) would be to share the ZFS filesystem in such a way that, inside the VM, it is seen as the same ZFS filesystem, manageable with the zfs and zpool commands.
What do you think is the best option for the task? The most important things are reliability + quotas in the VM, and then performance. The least important is being easy to set up and nice to use; I can manage with bad UX and gotchas.
If a performant solution exists only for sharing ZFS into containers, but not into VMs, I might try containers instead (e.g. via Incus).
I’d appreciate any comments/advice you may have.