KVM virtual machines using Virt Manager on ZFS

Hi all,

I was just wondering which storage pool type to select when creating a VM on a ZFS pool using virt-manager: do you select the type as dir or zfs?

Any help would be greatly appreciated,

Thanks

I would recommend using dir. This does mean you’re creating the actual storage on the command line, at least in part:

root@yourbox:~# zfs create zroot/images/newvm

… which you can then follow up by adding /images/newvm (or however it’s mounted on your filesystem) as a dir type storage pool in virt-manager. You can then add virtual drives in the form of either sparse files or qcow2 files within that dataset.
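For example, a minimal sketch of adding a qcow2 drive there (the image name and size are illustrative; adjust to your layout):

root@yourbox:~# qemu-img create -f qcow2 /images/newvm/newvm-disk0.qcow2 64G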

I’d also recommend a separate dataset for each VM, and typically, for each virtual drive (if you’ve got more than one in the same VM). This allows you to tune them for performance individually, snapshot and roll them back individually, etc.
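As a rough sketch of what that per-VM tuning looks like in practice (a 64K recordsize matching qcow2’s default cluster size is a common starting point, not a universal rule, and the snapshot name here is made up):

root@yourbox:~# zfs set recordsize=64K zroot/images/newvm
root@yourbox:~# zfs snapshot zroot/images/newvm@pre-upgrade
root@yourbox:~# zfs rollback zroot/images/newvm@pre-upgrade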


Fantastic, thank you very much :)

Hi again :-),

I was just wondering: what are the performance implications of using ZFS on the host with ext4 in the VM, versus ZFS on the host with ext4 on top of LVM in the VM?

Cheers

Hi!
If I may offer a recommendation: I don’t know which OS you’re using, but on Debian or any Debian derivative (Ubuntu, etc.) you can install the libvirt-daemon-driver-storage-zfs package:
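For example, on an apt-based system:

root@lhome01:~# apt install libvirt-daemon-driver-storage-zfs

Once installed, dpkg shows it: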

root@lhome01:~# dpkg -l |grep -i libvirt-daemon-driver-storage-zfs
ii  libvirt-daemon-driver-storage-zfs        9.0.0-4                             amd64        Virtualization daemon ZFS storage driver

Now you can add an entire ZFS pool, or just a single ZFS filesystem, to virt-manager’s storage (see my machine below), and in it create as many zvols (not datasets) as you want.

root@lhome01:~# zfs list
NAME                                       USED  AVAIL     REFER  MOUNTPOINT
zlhome01                                  2.71T   298G       24K  none
zlhome01/HOME                             1.97T   298G       24K  none
zlhome01/HOME/cmiranda                    1.95T   298G     1.13T  /home/cmiranda
zlhome01/HOME/root                        23.6G   298G     10.2G  /root
zlhome01/LIBVIRT                           740G   298G       24K  none
zlhome01/LIBVIRT/LHOME.bookworm01         9.75G   298G     9.25G  -
zlhome01/LIBVIRT/LHOME.data                129G   298G      129G  -
zlhome01/LIBVIRT/LHOME.porthos            11.5G   298G     11.5G  -
zlhome01/LIBVIRT/ORALAB.centos7a          2.22G   298G     2.14G  -
zlhome01/LIBVIRT/ORALAB.esxi7a             729M   298G      729M  -
zlhome01/LIBVIRT/ORALAB.ol610a            3.35G   298G     3.30G  -
zlhome01/LIBVIRT/ORALAB.ol78a               12K   298G       12K  -
zlhome01/LIBVIRT/ORALAB.ol8a              8.28G   298G     5.59G  -
zlhome01/LIBVIRT/ORAWORK.mintocs01        19.6G   298G     19.6G  -
zlhome01/LIBVIRT/ORAWORK.oud01             216G   298G      194G  -
zlhome01/LIBVIRT/ORAWORK.oud02             114G   298G      113G  -
zlhome01/LIBVIRT/ORAWORK.w10neo50s          12K   298G       12K  -
zlhome01/LIBVIRT/ORAWORK.w10optiplex9020  69.7G   298G     69.7G  -
zlhome01/LIBVIRT/ORAWORK.w11h170n         29.2G   298G     29.2G  -
zlhome01/LIBVIRT/ORAWORK.w11optiplex9020   128G   298G     84.1G  -
zlhome01/LIBVIRT/ol7a                       12K   298G       12K  -
zlhome01/VAR                              13.6G   298G       24K  none
zlhome01/VAR/lib.docker                   4.16G   298G     4.16G  /var/lib/docker
zlhome01/VAR/snap.lxd                     9.45G   298G     9.45G  /var/snap/lxd/

I suggest creating each individual zvol on the command line, using the sparse method:

zfs create -s -V 128G tank/libvirt/VMDISK
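Here -V 128G sets the volume size and -s makes the zvol sparse (no refreservation), so space is only consumed as the guest actually writes. You can verify with:

root@lhome01:~# zfs get volsize,refreservation tank/libvirt/VMDISK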

And you have two ways to tell virt-manager to use ZFS as storage:

The first method is to add the ZFS filesystem through virt-manager’s new-storage-pool assistant; in my setup the ZFS filesystem is zlhome01/LIBVIRT.
After that, only zvols appear in the devices list.
Look at these images:
https://imgur.com/a/PfanVu0

The second method is defining the pool with virsh, like I posted in this tweet:
https://x.com/Mstaaravin/status/1456420002706477067
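A minimal sketch of that approach, assuming the libvirt ZFS driver is installed (pool and dataset names are from my layout above, and NEWDISK is just an example volume name):

root@lhome01:~# virsh pool-define-as LIBVIRT zfs --source-name zlhome01/LIBVIRT
root@lhome01:~# virsh pool-start LIBVIRT
root@lhome01:~# virsh pool-autostart LIBVIRT
root@lhome01:~# virsh vol-create-as LIBVIRT NEWDISK 128G

Volumes created this way appear as zvols under zlhome01/LIBVIRT.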

Why do I use zvols instead of qcow2 files inside a ZFS dataset?
Because I can use native ZFS snapshots for my VMs, and I consider that more powerful.
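For example, I can snapshot a VM’s zvol directly, or clone a golden image into a new VM disk (names here come from my zfs list above; @golden is just an example snapshot name):

root@lhome01:~# zfs snapshot zlhome01/LIBVIRT/ORALAB.ol8a@golden
root@lhome01:~# zfs clone zlhome01/LIBVIRT/ORALAB.ol8a@golden zlhome01/LIBVIRT/ORALAB.ol8b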

  • Sorry for my English; it isn’t my native language.

@mercenary_sysadmin Any particular reason why you’re suggesting the qcow2 storage format? It doesn’t seem to make much sense to have yet another copy-on-write layer on top of ZFS.

Also, from the “Virtual Machines” section of the OpenZFS workload tuning documentation:

Virtual machine images on ZFS should be stored using either zvols or raw files to avoid unnecessary overhead.
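For reference, a sparse raw file like the docs describe needs no qemu tooling at all (path and size are illustrative):

root@yourbox:~# truncate -s 64G /images/newvm/newvm-disk0.raw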

If you want to use qemu migration and hibernation tools, you need the qcow2 format.

ZVOLs significantly underperform both raw files and qcow2 files, btw.

If you want to use qemu migration and hibernation tools, you need the qcow2 format.

This is not a feature I have needed to look into, hence my lack of knowledge. Would you mind sharing any details as to what tooling requires the qcow2 format? I can’t see anything in particular in libvirtd that would require qcow2 where raw isn’t possible.

ZVOLs significantly underperform both raw files and qcow2 files, btw.

Would you mind sharing resources that support this statement? I would be interested to know what sort of experimentation was done: whether fio was used, and what the profiles were.
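For context, by “profiles” I mean fio job parameters along these lines (purely illustrative, with a made-up test path):

root@yourbox:~# fio --name=randwrite --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --size=4G --runtime=60 --time_based --filename=/images/newvm/fiotest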

Thanks in advance.