zpool status -v ends with:
errors: Permanent errors have been detected in the following files:
<0xb38a>:<0x80>
Is there a way to clear this? I believe it’s pointing into a deleted snapshot.
If it referred only to a block in a snapshot and that snapshot was destroyed, it would clear after a scrub. Most likely, it’s referring to some metadata block somewhere, and until you try to ls the dir or stat the file that the metadata block points to, you won’t directly see the impact of it.
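To check whether that's what you're looking at, kick off a scrub and re-run status once it finishes (the pool name here is just a placeholder, substitute your own):

zpool scrub tank
zpool status -v tank    # re-check once the scrub has completed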
The only way I know of to fix the error for sure is to blow the pool away and restore from backup. With that said, you might be able to get some mileage out of the zfs_send_corrupt_data tunable, see here: Allow sending corrupt snapshots even if metadata is corrupted by allanjude · Pull Request #12541 · openzfs/zfs · GitHub
On FreeBSD, sysctl vfs.zfs.send.corrupt_data=1
or on Linux, echo 1 > /sys/module/zfs/parameters/zfs_send_corrupt_data
and then try the send again. It should definitely allow you to replicate; what I'm not sure about is whether it gives you a "neutered" version of the corrupt data on the other side, one that no longer triggers CKSUM errors.
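Roughly, after flipping the tunable, the retry would look something like this (pool, dataset, and snapshot names are placeholders for whatever you're actually replicating):

zfs send -Rv tank/data@snap | zfs receive -Fv backup/data    # -R sends the full replication stream, -F rolls back the target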
Again, I’m not certain that forcing a send will “fix” your issue, but it’s worth giving it a shot I think.
I’ve had a number of similar errors over time on a few of my pools. Most have been harmless in terms of actual data retrieval, and they all showed up somewhere in the 2.0-2.1 releases; since moving to 2.2 they’re mostly gone and things are quite clean now. Even though I didn’t have data issues, I was still quite picky about not having ANY errors show. I was able to get rid of them after two scrubs (a single scrub didn’t always clear things). Sometimes a scrub and a reboot helped, but two scrubs (and maybe a reboot) is pretty much the worst I had to do before things were really clean.
Again, just to be clear, I didn’t have any data issues (or rather, I still don’t know about any; everything I need just “works”).
I had similar issues in the early 2.0 versions; they went away around 2.2.4 or thereabouts. Two scrubs and a reboot always seemed to fix it.
I seem to remember reading somewhere that it was related to ZFS native encryption?
Thanks for the replies! I did run another scrub, and that error went away. There is still a single checksum error marked in zpool status -v, though.
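For anyone else who lands here: my understanding is that once the underlying data is readable again, the leftover counter can be reset and verified with something along these lines (pool name is a placeholder):

zpool clear tank        # reset the error counters
zpool scrub tank        # scrub again to confirm nothing comes back
zpool status -v tank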