Running Ubuntu 24.04. I'm experiencing audio glitching whenever the filesystem does syncs. The machine is a 9950X with 128 GB RAM. When doing somewhat meager things like building a Python venv, suddenly all the CPUs light up and all sound from Firefox glitches. Compression is set to zstd.
I’ve looked through ZFS doesn't respect Linux kernel CPU isolation mechanisms · Issue #8908 · openzfs/zfs · GitHub and that does not appear to have any working advice. Is there anything else I should consider?
Zstd is a LOT heavier-weight than it's been marketed as, IME. I'd recommend trying lz4 instead.
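If you want to test that theory, compression can be switched on the fly: only new writes use the new algorithm, and existing zstd blocks are left as-is until rewritten. A minimal sketch, assuming a dataset named `tank/home` (substitute your own):

```shell
# Check the current compression setting (dataset name is an example).
zfs get compression tank/home

# Switch new writes to lz4. Already-written zstd blocks stay compressed
# with zstd until they are rewritten (e.g. via copy or send/receive).
zfs set compression=lz4 tank/home
```

Note that reads of old data will still decompress zstd until that data is rewritten, but if the glitching happens on writes/txg syncs, switching should show an effect right away.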
I wouldn’t yet call zstd production-ready:
opened 08:24AM - 19 May 21 UTC
Type: Defect
Status: Triage Needed
| Type | Version/Name |
| --- | --- |
| Distribution Name | vpsAdminOS/NixOS |
| Distribution Version | master |
| Linux Kernel | 5.10.25+ (running .37 currently) |
| Architecture | x86_64 |
| ZFS Version | master HEAD as of 18.05.2021 |
| SPL Version | |
### Describe the problem you're observing
```
[ 8108.223833] PANIC: zfs: accessing past end of object 1c5a0/13707b (size=149504 access=149399+8191)
[ 8108.224008] Showing stack for process 429599
[ 8108.224023] CPU: 23 PID: 14578 Comm: statistics_coll Tainted: G OE 5.10.37 #1-vpsAdminOS
[ 8108.224030] Hardware name: Dell Inc. PowerEdge R620/01W23F, BIOS 2.8.0 06/26/2019
[ 8108.224037] In memory cgroup /osctl/pool.tank/group.default/user.1177/ct.9795/user-owned/lxc.payload.9795
[ 8108.224058] Call Trace:
[ 8108.224084] dump_stack+0x6d/0x88
[ 8108.224104] vcmn_err.cold+0x58/0x80 [spl]
[ 8108.224122] ? _cond_resched+0x15/0x30
[ 8108.224132] ? _cond_resched+0x15/0x30
[ 8108.224143] ? mutex_lock+0xe/0x30
[ 8108.224153] ? _cond_resched+0x15/0x30
[ 8108.224165] ? mutex_lock+0xe/0x30
[ 8108.224245] ? aggsum_add+0x171/0x190 [zfs]
[ 8108.224302] ? _cond_resched+0x15/0x30
[ 8108.224313] ? mutex_lock+0xe/0x30
[ 8108.224323] ? _cond_resched+0x15/0x30
[ 8108.224332] ? mutex_lock+0xe/0x30
[ 8108.224394] ? dbuf_find+0x1af/0x1c0 [zfs]
[ 8108.224516] ? dbuf_rele_and_unlock+0x134/0x660 [zfs]
[ 8108.224633] ? arc_buf_access+0x104/0x250 [zfs]
[ 8108.224771] zfs_panic_recover+0x6f/0x90 [zfs]
[ 8108.224909] dmu_buf_hold_array_by_dnode+0x219/0x520 [zfs]
[ 8108.225006] ? dnode_hold_impl+0x348/0xc20 [zfs]
[ 8108.225103] dmu_write_uio_dnode+0x4c/0x140 [zfs]
[ 8108.225198] dmu_write_uio_dbuf+0x4a/0x70 [zfs]
[ 8108.225298] zfs_write+0x48c/0xc70 [zfs]
[ 8108.225367] ? aa_put_buffer.part.0+0x15/0x50
[ 8108.225414] zpl_iter_write+0x105/0x190 [zfs]
[ 8108.225471] do_iter_readv_writev+0x157/0x1b0
[ 8108.225479] do_iter_write+0x7d/0x1b0
[ 8108.225486] vfs_writev+0x83/0x140
[ 8108.225497] do_writev+0x6b/0x110
[ 8108.225508] do_syscall_64+0x33/0x40
[ 8108.225517] entry_SYSCALL_64_after_hwframe+0x44/0xa9
```
### Describe how to reproduce the problem
If only I knew... the process is run by a user in a container, every night at approx 3:00 am :( Still investigating. Ideas for bpftrace commands/etc. that could catch more information are welcome :)
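One sketch for catching more context, assuming bpftrace is installed and the zfs module's symbols are visible to kprobes: hook `zfs_panic_recover` (the function that appears in the trace above) and record who triggered it and from where. This is untested on this system; symbol availability varies by build.

```shell
# Hypothetical one-liner: on every zfs_panic_recover call, print the
# triggering command and PID, and count kernel stacks leading to it.
bpftrace -e 'kprobe:zfs_panic_recover { printf("hit by %s (pid %d)\n", comm, pid); @stacks[kstack] = count(); }'
```

Leaving this running overnight and checking the `@stacks` map output after the 3:00 am window might at least identify the writing process and call path before the panic.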