For most of my workloads, not populating the ARC would make performance numbers go down, regardless of how fast the initial on-disk read is.
The default behavior in OpenZFS 2.3 is “direct=standard”: recordsize-aligned I/O is handled through the Direct IO path, while unaligned I/O still takes the buffered path through the ARC.
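To make that concrete, here’s a rough sketch of a read that should qualify for the Direct IO path, assuming a Linux system and a dataset with recordsize=1M (the `RECORDSIZE` constant and the file argument are mine for illustration, not from the thread):

```c
/* Sketch of a recordsize-aligned O_DIRECT read on Linux.
 * Assumption: the dataset uses recordsize=1M. */
#define _GNU_SOURCE   /* O_DIRECT is a GNU extension on Linux */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define RECORDSIZE (1024 * 1024)  /* must match the dataset's recordsize */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file-on-zfs-dataset>\n", argv[0]);
        return 1;
    }

    /* O_DIRECT asks to bypass the ARC; under direct=standard this is
     * honored only when the request is recordsize-aligned. */
    int fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* Buffer, offset, and length should all be recordsize-aligned;
     * otherwise the I/O falls back to the buffered/ARC path. */
    void *buf;
    if (posix_memalign(&buf, RECORDSIZE, RECORDSIZE) != 0) {
        perror("posix_memalign");
        close(fd);
        return 1;
    }

    /* Offset 0 is trivially aligned; in general, offsets should be
     * whole multiples of RECORDSIZE. */
    ssize_t n = pread(fd, buf, RECORDSIZE, 0);
    if (n < 0) perror("pread");
    else printf("read %zd bytes via the Direct IO path\n", n);

    free(buf);
    close(fd);
    return 0;
}
```

Note that with direct=standard a misaligned request doesn’t fail, it just quietly takes the buffered path, so you won’t see an error if you get the alignment wrong.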
I swear I’ve never had to think so much about how my filesystem worked as I have since I started using ZFS, not even 400 years ago when I had to regularly defrag my 68k Mac’s disk. (I don’t miss having to manage SCSI IDs and termination, either…)
Any suggestions on how to tell whether I/O is aligned or unaligned to recordsize? I use recordsize=1M unless a specific workload (e.g., databases, VMs) wants something different.
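One low-tech approach (my suggestion, not an official method): capture the application’s read/write syscalls with `strace -e trace=pread64,pwrite64 <app>` and check each offset/length pair against the recordsize. A toy checker, assuming “aligned” means both the offset and the length are whole multiples of recordsize=1M:

```c
/* Toy alignment checker for offsets/lengths pulled from a syscall trace.
 * Assumption: recordsize=1M on the dataset in question. */
#include <stdbool.h>
#include <stdio.h>

#define RECORDSIZE (1024UL * 1024UL)  /* recordsize=1M */

static bool recordsize_aligned(unsigned long offset, unsigned long length)
{
    /* Both the starting offset and the length must be whole multiples
     * of recordsize; otherwise ZFS routes the I/O through the ARC. */
    return (offset % RECORDSIZE == 0) && (length % RECORDSIZE == 0);
}

int main(void)
{
    /* Example values; substitute offsets/lengths from your own trace. */
    printf("offset=0      len=1MiB -> %s\n",
           recordsize_aligned(0, RECORDSIZE)
               ? "Direct IO" : "buffered (ARC)");
    printf("offset=512KiB len=1MiB -> %s\n",
           recordsize_aligned(512 * 1024, RECORDSIZE)
               ? "Direct IO" : "buffered (ARC)");
    return 0;
}
```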
Elsenet, a lot of people love to flame newbies for over-optimization and overcomplicating something that’s just a filesystem, but. ZFS finds new ways to complicate my life with recordsize and volblocksize weekly.
Update: per the Friday, February 7, 2025 iX podcast episode (https://www.youtube.com/watch?v=nTU6Xechrk0), Direct IO is disabled in the upcoming TrueNAS CE 25.04 Fangtooth beta (and probably through the entire release cycle). See the 2:00 mark in the video.
TrueNAS uses systemd extensions (systemd-sysext) to lock parts of the filesystem/OS read-only, and then layers extensions on top for drivers, which is apparently excellent for system stability, robustness, and auditing when properly configured. (I’d never heard of this and don’t fully understand it yet, beyond some parts of the OS-on-disk being read-only.)
Something about the way this works, combined with ZFS not being built into the kernel, means that Direct IO conflicts with some of the other extensions iX is introducing and causes kernel panics.
Apparently no one told their marketing team, which just sent out an email blast today advertising Direct IO as an upcoming Fangtooth feature.