After rebooting an Ubuntu server, a zpool (data only; the OS is on EXT4) showed two SAS drives as FAULTED:
# zpool status
  pool: zroot
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub in progress since Thu Aug 7 21:02:11 2025
        564G scanned at 1.04G/s, 149G issued at 282M/s, 763G total
        0B repaired, 19.57% done, 00:37:11 to go
config:

        NAME                       STATE     READ WRITE CKSUM
        zroot                      DEGRADED     0     0     0
          raidz3-0                 DEGRADED     0     0     0
            sda                    ONLINE       0     0     0
            sdb                    ONLINE       0     0     0
            scsi-35000039878613d45 ONLINE       0     0     0
            sdd                    ONLINE       0     0     0
            sde                    ONLINE       0     0     0
            4044550924801993745    FAULTED      0     0     0  was /dev/sdf1
            6246812777007092846    FAULTED      0     0     0  was /dev/sdg1
            sdh                    ONLINE       0     0     0
My best guess is that the device names for the 6th and 7th drives swapped on reboot, confusing ZFS. I tried every zpool command I could think of (replace, online, etc.) to bring them back into the pool, without success.
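Reconstructing from memory (the GUIDs are the ones shown in the `zpool status` output above), the attempts were along these lines:

```shell
# Try to bring the faulted vdevs back online by their GUIDs:
zpool online zroot 4044550924801993745
zpool online zroot 6246812777007092846

# Try replacing a faulted vdev with what should be the same physical disk:
zpool replace zroot 4044550924801993745 /dev/sdf

# Clear error counters in case the fault was transient:
zpool clear zroot
```

None of these got the drives accepted back into the pool.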
On a lark, I pulled the two drives and reseated them, and the pool immediately began resilvering. I'm not sure why that worked.
Notice that the 3rd drive is listed by its SCSI ID, while every other drive is listed by device name. That one is left over from a previous drive replacement.
To avoid the possibility of device names shifting around again, to the peril of my zpool, I would like to remove each drive from the pool one at a time and re-add it by SCSI ID instead of device name. Would someone kindly give me the proper commands/syntax to do that?
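My current guess at the per-drive procedure is below (shown for sda; the by-id path is illustrative, not the real ID of my disk). Is this the right approach, or is there a better way?

```shell
# Find the persistent by-id name for the drive currently known as sda:
ls -l /dev/disk/by-id/ | grep sda

# Take the drive out of service (pool stays up thanks to raidz3 redundancy):
zpool offline zroot sda

# Re-add the same physical disk under its persistent by-id name
# (scsi-35000039878613d46 is a placeholder, not my actual device):
zpool replace zroot sda /dev/disk/by-id/scsi-35000039878613d46

# Wait for the resilver to finish before moving on to the next drive:
zpool status zroot
```

I'd repeat this for each of the seven drives still identified by device name, one at a time.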