Running ZFS in a VM

I was wondering if it is OK to run ZFS inside a VM for just basic file storage. I have limited hardware, and the KVM hypervisor has a couple hundred GBs I can cut up into a couple of QCOW2 volumes to set up a basic zpool, but I wanted to ask before I started and caused more problems than I'd have with a VM serving a basic NFS share.

I think it depends on the configuration you are planning. I’m running ZFS in a VM with two HDDs passed through to the VM and formatted as a ZFS mirror.

It would probably help to know what you mean by “cut up into a couple of QCOW2 volumes”. ZFS provides the most benefit when it manages the raw devices directly. There are situations where a ZFS pool can be created from a disk partition (for example, when a single-drive host boots from an EXT4 partition and has another partition formatted for ZFS on the rest of the drive).
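
For example (device names here are made up), a pool built on a spare partition rather than a whole disk is just:

# hypothetical layout: /dev/sda1 holds the EXT4 root, /dev/sda2 is left over for ZFS
zpool create tank /dev/sda2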

I don’t have any physical drives to spare, so I was just going to create a couple of virtual drives and use ZFS that way, if that’s possible and OK.

If by “virtual drives” you mean creating ZFS filesystems (a.k.a. datasets) and using those to store the QCOW2 files, that should be fine. I’d look up any settings that may provide better performance for ZFS filesystems that host QCOW2 files, but that’s probably only going to matter if you want to eke out the last bit of performance.

It is convenient to have a separate dataset for the QCOW2 files so you can tailor settings, take snapshots, and back them up more easily.
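
If you do go that route, a minimal sketch (pool and dataset names are placeholders; 64K recordsize is the tuning commonly suggested to match QCOW2’s default cluster size, but check current guidance for your workload):

# dedicated dataset for VM images, with commonly suggested tuning
zfs create -o recordsize=64K -o compression=lz4 tank/vm-images
zfs set atime=off tank/vm-images   # optional: skip access-time updates on large image files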

No, it would be a VM with 4 virtual hard drives: 1 OS drive running ZFS and 3 other drives as the zpool.

I don’t understand what you mean by “4 virtual hard drives” so I cannot comment on it.

OK, let me try to explain a bit better. I have 1 box running libvirt that hosts all my VMs. The libvirt VMs use QCOW2 files as their virtual hard drives, so QCOW2 file => hard drive for the VM. I want to create a ZFS VM with 3 QCOW2 files (hard drives for the VM) just to store some basic files. All the virtual drives are just QCOW2 files on the hypervisor's SSD, so there is no real fault tolerance, but I want the scrub and snapshot features.

Generally speaking, this is a bad idea. At a bare minimum, there’s no point in giving ZFS in a VM multiple virtual drives unless the virtual drives are individually located on different physical drives.

More commonly, when putting ZFS inside a VM rather than outside, you pass the VM entire physical drives, so that ZFS is still as close as possible to the bare metal.
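
With libvirt, for example, you can hand the guest an entire physical disk by its stable /dev/disk/by-id path (the domain name and device ID below are invented):

# attach a whole physical drive to the guest "zfsbox" as a virtio disk
virsh attach-disk zfsbox /dev/disk/by-id/ata-EXAMPLE_SERIAL vdb --targetbus virtio --persistent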

Understood. Would the setup I described be OK for testing and learning ZFS, and for dealing with outages and failed drives?

Oh, certainly! But there are easier ways to manage testing. You don’t even need VMs for that, in a lot of cases.

Behold the majesty of the test pool created from sparse files:

root@elden:/tmp# for drive in {0..2} ; do truncate -s 20T disk$drive.raw ; done

We just created three 20TiB sparse files. If you’re not familiar with sparse files, they report the size you specify (20TiB, in this case) but don’t actually reserve or use that space on disk until you put that much data in them. I don’t actually have 20TiB free on this pool, let alone 60TiB, but since these files are sparse, that doesn’t matter yet.
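
You can see the apparent-versus-actual size difference for yourself on the files we just made:

ls -lh /tmp/disk0.raw   # reports the 20T apparent size
du -h /tmp/disk0.raw    # reports the (near-)zero space actually allocated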

Now that we’ve got our test “drives”, let’s build a pool out of them:
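
The create itself is a single command along these lines (file-backed vdevs have to be given by absolute path; raidz1 here matches the status output below):

root@elden:/tmp# zpool create demopool raidz1 /tmp/disk0.raw /tmp/disk1.raw /tmp/disk2.raw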

root@elden:/tmp# zpool list demopool ; echo ; zpool status demopool
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
demopool  60.0T   196K  60.0T        -         -     0%     0%  1.00x    ONLINE  -

  pool: demopool
 state: ONLINE
config:

	NAME                STATE     READ WRITE CKSUM
	demopool            ONLINE       0     0     0
	  raidz1-0          ONLINE       0     0     0
	    /tmp/disk0.raw  ONLINE       0     0     0
	    /tmp/disk1.raw  ONLINE       0     0     0
	    /tmp/disk2.raw  ONLINE       0     0     0

errors: No known data errors

There it is–a “three drive”, “60TiB” RAIDz1 vdev created on a single 2TiB NVMe drive (my actual pool)! Now, I will warn you that not all filesystems will let you actually create a 20TiB sparse file–if you’re getting ugly errors, try 20GiB instead, and just convert the GiB to TiB in your head if it’s important to you to see the sizes while testing.
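
While the demo pool is still around, you can also rehearse the outage-and-replacement handling you asked about. A rough sketch, using one more sparse file as the "replacement" drive:

zpool offline demopool /tmp/disk1.raw                  # simulate yanking a drive
truncate -s 20T /tmp/disk3.raw                         # make a fresh "replacement" drive
zpool replace demopool /tmp/disk1.raw /tmp/disk3.raw   # resilver onto the new file
zpool status demopool                                  # watch the resilver complete
zpool scrub demopool                                   # then verify the whole pool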

Once you’re done, clean up your toys:

root@elden:/tmp# zpool destroy demopool
root@elden:/tmp# rm /tmp/disk*.raw

Pretty slick, yeah? Basically, there just isn’t much reason to faff around with fake “disks” in virt when you’re trying to practice ZFS failover, because you can practice it without even needing virt in the first place.

But if you do really want to do it with “real fake disks” in the form of virtual disks attached to a VM, that’s fine too; just don’t expect it to be real-world useful as opposed to “a way I can practice breakfixes.” :slight_smile:

I’ve used files to explore ZFS also. I’ve scripted some examples and put them up at https://github.com/HankB/Fun-with-ZFS