Several months ago I needed to remove either my SLOG or my special small blocks vdev (I can’t remember which) and re-create it on different hardware. It went smoothly, without drama. Since then, my pool status has forever commemorated the occasion with an added “remove:” paragraph:
  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 03:49:43 with 0 errors on Sun Jun 9 04:13:45 2024
remove: Removal of vdev 5 copied 166M in 0h0m, completed on Fri Nov 24 22:06:28 2023
        51.8K memory used for removed device mappings
config:

        NAME         STATE     READ WRITE CKSUM
        tank         ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            d5sZC0L  ONLINE       0     0     0
            d4sL4ZM  ONLINE       0     0     0
          mirror-1   ONLINE       0     0     0
            d0sDT6F  ONLINE       0     0     0
            d3sLDGC  ONLINE       0     0     0
          mirror-2   ONLINE       0     0     0
            d1sTLCC  ONLINE       0     0     0
            d2sLQCT  ONLINE       0     0     0
        special
          mirror-6   ONLINE       0     0     0
            d6s68C4  ONLINE       0     0     0
            d7s68AF  ONLINE       0     0     0
        logs
          nvme0n1    ONLINE       0     0     0

errors: No known data errors
I have tried zpool clear, and the server has been rebooted a few times since the change occurred, but the statement remains. Is there a way to clear it? Does it provide some value to me that I’m not considering?
Thanks.
Not that I’m aware of.
It lets you know both that you’ve got a remapping table in play and how much RAM that table occupies. That isn’t much of an issue with your removal, which only eats 52 KiB or so, but if you removed a couple more vdevs with more data on them and wound up with a much larger remapping table, you’d want to know about that. Just as importantly, so would anybody who inherited the system from you, or anybody you called in for support on it.
The thing is, vdev removal is not entirely clean. You don’t get exactly the pool you’d have had if you’d never added the removed vdev in the first place; you’re left with a table that intercepts reads of blocks at their original locations and redirects them to their new ones. That table is with you essentially forever after you remove a vdev. Again, not a big deal if you never had more than a few hundred MiB on the removed vdev, but it can become a very big deal if somebody tries something bolder, like “let me remove this vdev that’s been part of my pool for the last year and has several TiB on it.”
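To put that in command-line terms, here’s a rough sketch; the vdev name is made up, and zpool wait needs OpenZFS 2.0 or newer:

# Evacuate and remove an entire top-level vdev. Its blocks get copied to
# the remaining vdevs, and a remapping table is created so the old block
# locations still resolve.
zpool remove tank mirror-9

# Optionally block until the evacuation has finished (OpenZFS 2.0+).
zpool wait -t remove tank

# From here on, zpool status carries the "remove:" section recording how
# much data was copied and how much RAM the mapping table occupies.
zpool status tank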
For some reason, I did not get this warning when I removed a 4 TB disk from a 3-way mirror a year ago with the pool 80% full. What could be going on there? The pool in question is still running.
Mirrors are a bit different from special vdevs or RAIDZ: pulling one disk out of a 3-way mirror is a detach rather than a removal. The remaining copies already hold all the data, so nothing gets copied and no remapping table is created; the remove: paragraph only shows up when an entire top-level vdev is removed and its data has to be evacuated.
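For example (made-up pool and disk names):

# Dropping one member of a three-way mirror is a detach: the surviving
# copies already hold all of the data, so nothing is copied and no
# remapping table is created.
zpool detach tank ada2

# Removing an entire top-level vdev (zpool remove) is the operation that
# evacuates data and leaves the persistent "remove:" entry behind.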
I see. I did get that message when removing an entire mirror vdev from a pool. Would the space required for the remappings go away as data is modified and old snapshots are removed, or how does this work?
I did this a few years back, removing ~2.88T of data. Even with that, the table is only 5MB:
remove: Removal of vdev 0 copied 2.88T in 6h47m, completed on Sun Dec 26 04:06:47 2021
        5.41M memory used for removed device mappings
Would this value go down if the data that was moved is later deleted from the pool? I wonder whether the whole entry would go away if the mapping memory reached zero.
You’re very unlikely to ever get rid of ALL the affected data, IMO.
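It’s easy enough to keep an eye on the number over time, though; something like this (pool name assumed to be tank) shows just the relevant lines:

# Print the removal summary line plus the line after it, which reports
# how much memory the removed-device mapping currently uses.
zpool status tank | grep -A1 'remove:'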