I just wanted to run my RAM usage by you all to see if it is expected behaviour. I have a machine with 64GB of RAM, three 4TB hard drives, and two 1TB SSDs, running two VMs: one with 6GB of RAM and one with 20GB. However, it is still using 2GB of swap. All of the RAM is used according to htop (1/3 of this is yellow, which I assume is the ZFS cache), and I was wondering whether using swap is normal behaviour, or whether you would allocate your RAM differently.
Any help would be greatly appreciated, I have received so much guidance from you all in the past and it has made my journey with ZFS so much better.
htop: if you have a recent htop version (3.3+, I think) you can add ZFS ARC usage to your htop display. You can sort by VIRT (descending) to see what is most likely to page out virtual memory first, and compare that to RES (descending) to see what is filling RAM.
free -m : for an understanding of what other caches and buffers are using
arc_summary : a good overview of how big and full your ARC is
It is useful to determine what your programs and desktop environment are using memory for, versus your ARC usage. It is possible to reduce the maximum ARC size by setting the zfs_arc_min and zfs_arc_max module parameters.
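For example, on a Linux system with OpenZFS, capping the ARC looks something like this (the 16GiB figure here is just an illustration — pick a limit that fits your workload):

```shell
# Cap ARC at 16 GiB (16 * 1024^3 = 17179869184 bytes) for the running system;
# the ARC shrinks gradually rather than instantly
echo 17179869184 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

# Make the limit persistent across reboots
echo "options zfs zfs_arc_max=17179869184" | sudo tee /etc/modprobe.d/zfs.conf

# On Ubuntu/Debian, rebuild the initramfs so the option applies at boot
sudo update-initramfs -u
```

Note that if the running ARC is already larger than the new limit, it may take memory pressure (or dropping caches) before it actually shrinks down to it.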
Thanks for the help. The ARC is using 32GB of RAM and there appears to be some free space in the RAM; however, the swap file is still growing and is up to 3ish GB. My only real question is whether this is something I should be concerned about.
It is running Ubuntu server with the Gnome DE installed, Virt Manager, Timeshift, ZFS and Sanoid and the two virtual machines. It really doesn’t do anything but just run the virtual machines.
I am doing something similar on my server(s) and I do see lower swap usage than you. For instance, my main server is running Debian 12, has 64GB of RAM, and three separate ZFS pools across a mix of rust and SSD.
I am using nearly all of my RAM, but only 18GB is ‘active’ for my processes with the remainder being ZFS. I have a 2GB swap space built, but am only using 31MB.
My system is rebooted weekly, which is likely a factor here, but this machine is running 11 containers via Podman and 6 VMs, each using up to 2GB of RAM.
My ‘swappiness’ level is the default for Debian at 60 - perhaps yours is different?
You could do a little digging around with swappiness perhaps to see?
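Checking your current value is quick — on any Linux box it should be something like:

```shell
# Read the current swappiness setting (default is usually 60)
cat /proc/sys/vm/swappiness

# Equivalent, via sysctl
sysctl vm.swappiness
```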
But, if it’s running well I’d not be worried about it…
Out of the box, OpenZFS will use up to 1/2 your physical RAM for ARC. That accounts for 32GiB, which plus the 26GiB for your two VMs comes to 58GiB of your total 64GiB.
Typically, 6GiB will be plenty for the host. But if you’re running a full desktop on it and potentially doing some web browsing and so forth, you can eat into that 6GiB awfully quickly without realizing it.
Even without that, in my opinion and experience Ubuntu is way too quick to try to page things out to swap. In theory, swap is helpful even on a system with more RAM than it needs: when applications overcommit memory that they'll never actually use, those overcommits can be allocated against swap rather than against system RAM. That in turn frees system RAM from having to purge perfectly good filesystem cache in order to make room for overcommits that applications will never actually write to.
But in practice, I’ve seen the kernel decide to page out–for example–the working RAM of GIMP, leaving me with lag from seconds to even minutes when I tab back into that application and it needs to restore the paged-out memory to real RAM, even when there’s plenty of actually free memory available!
You can limit how stupid the kernel gets with this using the vm.swappiness tunable–which I generally set to zero–but it may still occasionally get obnoxious about committing things to swap that actually matter.
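Setting it is a one-liner; something along these lines (the file name under /etc/sysctl.d is just a convention — any .conf name works):

```shell
# Lower swappiness for the running system (takes effect immediately)
sudo sysctl vm.swappiness=0

# Persist the setting across reboots
echo "vm.swappiness = 0" | sudo tee /etc/sysctl.d/99-swappiness.conf
```

Note that vm.swappiness=0 doesn't disable swap; it just tells the kernel to strongly prefer reclaiming file cache over paging out anonymous (application) memory.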
In those cases, you can disable swap entirely–but remember, no swap means if an application tries to commit RAM that isn’t there, it crashes instead of just running slowly.
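If you do decide to go that route, disabling swap is roughly:

```shell
# Turn off all active swap immediately
sudo swapoff -a

# To keep it off after reboot, comment out the swap line(s) in /etc/fstab.
# This sed sketch prefixes any line with a whitespace-delimited "swap" field
# with '#', keeping a .bak backup — double-check /etc/fstab afterwards:
sudo sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab
```

On some Ubuntu installs swap is also activated by a systemd swap unit rather than fstab, so check `systemctl --type swap` too before assuming it's gone for good.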