Through my own fault, I have a very unhealthy seven-year-old ZFS pool. Status as of right now:
```
root@truenas[~]# zpool status -x
  pool: frankenpool
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: resilvered 183G in 1 days 05:27:19 with 9790 errors on Sun May 25 05:20:36 2025
config:

        NAME                                            STATE     READ WRITE CKSUM
        frankenpool                                     DEGRADED     0     0     0
          raidz3-0                                      DEGRADED     0     0     0
            gptid/db0de007-cba8-11e5-84fb-d43d7ebc5ce0  DEGRADED     0     0 53.0K  too many errors
            gptid/5d3a0859-adc2-11e5-8401-d43d7ebc5ce0  DEGRADED     0     0 53.2K  too many errors
            gptid/269386ab-7d8f-11e6-9331-d43d7ebc5ce0  DEGRADED     0     0 54.6K  too many errors
            gptid/cdde06fd-f2d6-11e6-b1be-d43d7ebc5ce0  DEGRADED     0     0 27.1K  too many errors
            14042028026288025165                        UNAVAIL      0     0     0  was /dev/gptid/636876a9-adc2-11e5-8401-d43d7ebc5ce0
            gptid/b8e22b15-881e-11e6-9163-d43d7ebc5ce0  DEGRADED    45     0  104K  too many errors
            gptid/08054759-f2d6-11e6-b1be-d43d7ebc5ce0  ONLINE       0     0   290
            13049790948581386619                        UNAVAIL      0     0     0  was /dev/gptid/9135367f-7dca-11e6-9b67-d43d7ebc5ce0
            gptid/6c0bbc77-adc2-11e5-8401-d43d7ebc5ce0  DEGRADED     0     0 53.0K  too many errors

errors: 9803 data errors, use '-v' for a list
```
Of course no backup exists, so I'll try to recover as much as possible. How, you ask? Stay tuned!
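For the impatient, the broad first step with "data errors" like these is usually to pull the list of files ZFS already knows are damaged and then copy everything that is still readable somewhere safe before doing anything else to the pool. A rough sketch of that, with placeholder paths (/mnt/rescue and the log file names are just examples, not my actual setup):

```
# Save the list of files with known permanent errors ("use '-v' for a list")
zpool status -v frankenpool > /root/frankenpool-damaged-files.txt

# Copy whatever is still readable to a spare disk mounted at /mnt/rescue (placeholder).
# rsync keeps going when individual files fail to read and reports them at the end.
rsync -av /mnt/frankenpool/ /mnt/rescue/ 2> /root/rescue-errors.log
```

Whether that is enough, or whether the pool needs to come up read-only first, is part of what I still have to figure out.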
If this is of any interest, I'll post background and progress here. Otherwise I'll just delete this thread so the forum doesn't get polluted.