My ZFS is exclusively for block (excepting the local OS which runs nothing significant).
All ZVOL + iSCSI over 10Gb network. 4x slow rust and a small Optane LOG + CACHE.
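For reference, a minimal sketch of that layout. The vdev geometry, device names, zvol size, and volblocksize are all assumptions (the post doesn't say whether the rust is mirrors or raidz); adjust to taste:

```sh
# Hypothetical: 4x HDD in two mirrors, Optane partitioned for LOG + CACHE.
zpool create tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    log    /dev/nvme0n1p1 \
    cache  /dev/nvme0n1p2

# A sparse ZVOL to export over iSCSI (volblocksize is a guess; match it
# to the initiator's workload).
zfs create -s -V 500G -o volblocksize=64K tank/vm-datastore
```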
The storage is used thusly:
- VMware + Microsoft lab: General-purpose VMs with no substantial data ingestion/egress. Not fully static but also not very dynamic; mostly the same VMs fired up over and over.
- Block storage for a Windows desktop w/ NTFS-formatted volumes. I have an AI lab of sorts hosted on iSCSI to leverage ZFS snapshotting, data integrity, etc.
The VMware setup loves `l2arc_mfuonly`. Repeatedly-accessed blocks rapidly become mfu and thus CACHE-resident. L2ARC persistence is a game-changer.
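For anyone wanting to replicate this, a sketch of the relevant OpenZFS module parameters on Linux (`l2arc_rebuild_enabled` is the persistent-L2ARC knob; it defaults to on in current OpenZFS):

```sh
# Runtime: only feed L2ARC from the MFU list.
echo 1 > /sys/module/zfs/parameters/l2arc_mfuonly

# Persist across reboots; l2arc_rebuild_enabled=1 keeps the persistent
# L2ARC rebuild behaviour (the default).
cat >> /etc/modprobe.d/zfs.conf <<'EOF'
options zfs l2arc_mfuonly=1 l2arc_rebuild_enabled=1
EOF
```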
IMHO those who interact with VMs on a small ZFS rust pool (and therefore feel the nasty storage latency) ought to have a modicum of L2ARC to cover booting their VMs. I can boot a VM and watch a nice string of 100s in the `l2hit%` column.
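That `l2hit%` column comes from arcstat; a sketch of the invocation, using field names from the arcstat that ships with OpenZFS:

```sh
# Print ARC/L2ARC hit rates once per second while booting a VM.
arcstat -f time,read,hit%,l2read,l2hit%,l2size,l2asize 1
```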
The desktop access patterns are very different: probably 95% read, and 95% of said reads are long sequential (big-block) file access. Each file is 6-8GB and gets read perhaps 10-15x per workday, with maybe 10-15 distinct files in play on any given workday.
My CACHE can readily saturate the 10Gb network. My rusty shitpool couldn’t saturate two tin cans and a string.
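Back of the envelope, with the numbers above, the desktop's daily working set is small relative to typical CACHE devices, which is exactly why letting it into CACHE is tempting:

```sh
# 10-15 files/day x 6-8 GB/file -> roughly 60-120 GB touched per day,
# comfortably within reach of a modest Optane CACHE partition.
echo "$((10 * 6))-$((15 * 8)) GB"   # prints: 60-120 GB
```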
`l2arc_mfuonly` appears to do exactly what it says on the tin: it prevents the desktop's I/O from purging CACHE. My problem is I want to let this stuff into cache and not have it blow out all of my VM blocks, which is exactly what happens with `l2arc_mfuonly` turned off.
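One way to watch the blowout happen is via the ARC kstats (standard Linux path; field names straight from `arcstats`):

```sh
# MRU vs MFU footprint in ARC. With l2arc_mfuonly=0, a big sequential
# read day balloons mru_size, and that's what feeds (and floods) L2ARC.
awk '$1 == "mru_size" || $1 == "mfu_size" {printf "%-10s %6.1f GiB\n", $1, $3/2^30}' \
    /proc/spl/kstat/zfs/arcstats
```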
tl;dr:

- `l2arc_mfuonly=1` gives CACHE to my VMs at the desktop's expense.
- `l2arc_mfuonly=0` (Miley-mode) turns my desktop into an mru wrecking ball.
I don’t see any helpful-looking knobs for this. Is the prescription more ~~cowbell~~ L2ARC?