[TrueNAS] Workload - Home Server MariaDB and MySQL Servers Storing Data on ZFS NAS: Is there any Benefit to using iSCSI Instead of NFS?

This is a quasi-followup to my earlier post, here: [Sanity Check] MariaDB VM on ZFS-Based Host Storage - Optimizing Volblocksize/Recordsize for MariaDB? (Am I doing this right?)

Now that I have an NVMe mirror pool up and running, I’d like to move the storage for my MariaDB (and later, a MySQL) server VM from the zVol-backed virtual disks that live on my Proxmox node to the NVMe pool on my NAS. Having production data living on virtual disks attached to the VM is making me twitchy. :stuck_out_tongue: (It also ends up sending [encrypted] personal data from my database to my Proxmox cloud backup provider, which isn’t ideal, if only because of how much space it’ll use.)

In either case, the file server is TrueNAS SCALE 24.10.2.2. The NAS will use a 10 Gbps storage VLAN to communicate with the database server VM.

For home/home office/hobbyist production, is there any benefit to using a zVol with iSCSI to mount the database data in the VM? It’s so much easier to just use NFS. I’ve actually done it before on an older test system. I also like being able to access the database filesystem from the NAS itself if necessary.

I know iSCSI is potentially faster, but is it really worth it for this use? I don’t need to use iSCSI just so I can learn about it–I have another, non-production project in mind for teaching myself iSCSI. :slight_smile:

Thanks!

Probably not. If you find iSCSI intimidating in this context, I’d start out on NFS and switch later only if a performance issue actually shows up.


Thanks! I’ll go with NFS for this, then.

iSCSI is definitely one of those things that feels like it won’t be a problem once I get some experience with it, but will be a distraction until then while I’m trying to do other things.

Like, I’m still not sure I understand the potential negative implications of using ZFS inside a VM that lives on a ZFS storage pool in Proxmox … which seems like it might be a problem if I wanted to use iSCSI with ZFS inside a VM. OTOH, I also do not yet understand the implications of using ext4 with an iSCSI share inside the VM when the VM lives on a ZFS store in Proxmox. (I think using ext4 is likely the simpler, easier option, but I’m only guessing and also assuming I’d still have the benefit of ZFS backups and snapshots on the host/NAS if I set everything up right.)

That feels like a rabbit hole with at least one rabbit hole inside it, and I know myself well enough to know it’d be a distraction while I was trying to get my (small) production database up and running.

I’m much more confident I know what I’m doing with NFS for this.
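For the record, here’s roughly the fstab entry I’m sketching out for the VM (hostname, export path, and mount point are all made up for this post):

```
# /etc/fstab on the MariaDB VM -- names/paths are placeholders
# hard:   block on a NAS outage rather than return I/O errors to the DB
# nofail: don't hang boot if the NAS happens to be down
# vers=4.2: use NFSv4.2 if the TrueNAS export allows it
nas.storage.lan:/mnt/nvme-pool/mariadb  /var/lib/mysql  nfs  vers=4.2,hard,nofail,_netdev  0  0
```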

I’m saving learning iSCSI for my Windows 11 gaming VM. I’ve got a copy of eXoDOS, which contains All the DOS Games, as well as a ton of manuals and other materials. It needs about 1 TB of storage, and I don’t think the emulators will be happy trying to run games over SMB or NFS. iSCSI it is. :slight_smile:

Let us know when you move forward with iSCSI for the Windows box. I’ve spent many hours fooling around with this and I think I’ve pretty much seen it all by now…

I run both at work and I don’t notice any real-world performance difference, but there’s one important thing to know if you run a lot of LXC containers (which I do): NFS data stores don’t allow snapshotting LXCs. VMs using qcow2 disks get that format’s built-in snapshot capability, but LXC volumes are stored as raw images on NFS storage, and raw images have nothing to snapshot with.
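Concretely, the difference shows up like this (the VMID/CTID are made up):

```shell
# VM 101 with a qcow2 disk on the NFS store: snapshot works,
# because qcow2 handles the snapshot internally
qm snapshot 101 before-upgrade

# CT 200 with a raw rootfs on the same NFS store: snapshot fails,
# since raw volumes on NFS have no snapshot mechanism
pct snapshot 200 before-upgrade
```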

If I remember correctly, the built-in iSCSI data stores are kind of weird, so I set up iSCSI manually, created a zpool on top of the device in Proxmox, and added it as a regular ZFS data store in the UI.
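In case it helps anyone later, the manual route looks roughly like this on the Proxmox node (portal IP, IQN, and pool name are placeholders, not my real setup):

```shell
# Discover and log in to the TrueNAS target
iscsiadm -m discovery -t sendtargets -p 10.0.10.5:3260
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:pve-store -p 10.0.10.5:3260 --login

# Reconnect automatically after reboots
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:pve-store -o update \
  -n node.startup -v automatic

# Create a zpool on the new block device (double-check the device path first!)
zpool create pve-iscsi \
  /dev/disk/by-path/ip-10.0.10.5:3260-iscsi-iqn.2005-10.org.freenas.ctl:pve-store-lun-0

# Register it as a regular ZFS storage in Proxmox
pvesm add zfspool pve-iscsi -pool pve-iscsi -content images,rootdir
```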


Thanks for your message. :slight_smile: I was completely unaware of the caveats with NFS and LXC containers.

I’ve got MariaDB in a VM right now–given my inexperience it was just more understandable to do it that way on my first go–but I’m still working out what to do for permanent storage of the database data. I’m confident in my ability to set up an NFS share against a ZFS dataset in TrueNAS, but OTOH my gut is telling me that iSCSI is the better long term solution, even if it’s harder to set up now. Using NFS introduces the NFS stack as a whole separate vector for potential issues.

(Though, realistically, for my little database that runs local web apps and minecraft server managers, I would probably be fine.)

I prefer to use ext4 as the virtual disk format inside my VMs, since the actual storage (Proxmox or TrueNAS) handles the backup of the real data via ZFS/PBS. I can also keep VM RAM/vCPU usage lower if I’m not running ZFS inside the VM. I’m not yet clear on whether there are any real negative implications to using an ext4-formatted zVol block device stored on a ZFS pool as a virtual disk in a VM. I don’t think it’s an issue for home use. But. We’ll see, I guess?
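On the block-size side of that question, the main thing I’d watch is keeping the zVol’s volblocksize lined up with InnoDB’s 16K page size, since volblocksize can only be set at creation time. Something like this (pool and zVol names are hypothetical):

```shell
# On the ZFS host: create the zVol with a 16K volblocksize to match
# InnoDB's default 16K page size (names are placeholders)
zfs create -V 100G -o volblocksize=16k nvme-pool/mariadb-disk

# Inside the VM: plain ext4 on the attached disk; ext4's 4K blocks
# divide evenly into the 16K volblocksize
mkfs.ext4 -L mariadb-data /dev/sdb
mount /dev/sdb /var/lib/mysql
```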

Using TrueNAS as ZFS backing storage for VMs and LXCs is definitely, out of the box, harder than it should be. TrueNAS doesn’t implement any of the iSCSI target providers that Proxmox’s ZFS-over-iSCSI storage type knows how to manage. There’s a third-party plugin for TrueNAS SCALE (24.10) that enables using it as target storage for Proxmox’s ZFS over iSCSI, but that doesn’t work with TrueNAS 25.04, and it’s not certain it can be patched.

TrueNAS’s dev team has ruled out implementing the iSCSI target support Proxmox expects, because it would mean completely reimplementing iSCSI on their end of things. Apparently, calling it a breaking change would be putting it mildly.

Regular iSCSI, where each disk is just mounted by an initiator on the Proxmox side and added to the VM, is much easier to set up on the TrueNAS side, but much more complicated to manage on the Proxmox side of things.
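For comparison, the “regular” route on the Proxmox side is just a plain iSCSI storage entry, plus managing a LUN per disk on TrueNAS by hand. The storage entry in /etc/pve/storage.cfg looks something like this (storage name, portal, and IQN are placeholders):

```
iscsi: truenas-iscsi
        portal 10.0.10.5
        target iqn.2005-10.org.freenas.ctl:pve-store
        content images
```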

I’d be interested in how you set up your iSCSI, if you can find documentation. :slight_smile: