How do you visually differentiate disks in a bay?

I'm wondering about labelling my disks so I know which disk is which.
If I have four disks and one of them fails, how do I visually confirm at a glance which physical disk failed so I can replace it?

  • Wondering if it would help to use a felt pen or sticker or labelled masking tape to differentiate the drives.

Currently using a repurposed tower. Thanks for any suggestions!

I use partition labels to create my zpools, and in each label I include the disk’s physical location and its serial number.
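
One way to do that (a sketch, not necessarily the poster's exact commands; the device names, bay names and serials below are placeholders) is to give each data partition a GPT name and build the pool from the stable /dev/disk/by-partlabel/ links:

# name partition 1 on each disk with its bay + serial (placeholder values)
sudo sgdisk --change-name=1:bay1-SN1234 /dev/sda
sudo sgdisk --change-name=1:bay2-SN5678 /dev/sdb
# udev exposes the names as stable symlinks you can create the pool from
ls -l /dev/disk/by-partlabel/
sudo zpool create tank mirror /dev/disk/by-partlabel/bay1-SN1234 /dev/disk/by-partlabel/bay2-SN5678

zpool status will then show those names, so a failed disk already tells you which bay it sits in and which serial to look for.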


Create a file /etc/zfs/vdev_id.conf with a line for each disk. Each line should be of the form

alias nice_name /dev/disk/by-path/<port-path>

Part of mine looks like this:

alias ab-1    /dev/disk/by-path/pci-0000:01:00.0-sas-phy0-lun-0
alias ab-2    /dev/disk/by-path/pci-0000:01:00.0-sas-phy1-lun-0
alias ab-3    /dev/disk/by-path/pci-0000:01:00.0-sas-phy2-lun-0
alias cd-1    /dev/disk/by-path/pci-0000:01:00.0-sas-phy3-lun-0
    :
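
If you're not sure which by-path entry corresponds to which port, list the directory and match the symlink targets to your disks (the exact entries depend on your controller):

ls -l /dev/disk/by-path/ | grep -v part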

zpool status then looks like

  tank            ONLINE
    raidz1-0      ONLINE       0     0     0
      ab-1        ONLINE       0     0     0
      ab-2        ONLINE       0     0     0
      ab-3        ONLINE       0     0     0
      cd-1        ONLINE       0     0     0
      cd-2        ONLINE       0     0     0

Make your nice_name anything you like. Then run sudo udevadm trigger. You will also have to export the pool and re-import it with something like sudo zpool import -d /dev/disk/by-vdev tank
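
Putting those steps together, the switch-over looks roughly like this (using the tank pool from the example above; substitute your own pool name):

sudo udevadm trigger                          # re-run udev rules so the /dev/disk/by-vdev/ aliases appear
sudo zpool export tank                        # the pool must be offline before re-importing under the new names
sudo zpool import -d /dev/disk/by-vdev tank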

My solution is less elegant but doesn’t require editing any config files: I simply keep lists of my key hardware, including S/Ns and an ID (e.g. HD161, HD162), and I write the ID in big letters on the drives where I can see it. PVE shows the S/N under node > Disks.
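
If you'd rather pull the serials from the command line than from the PVE UI, something like this lists them (columns vary slightly between lsblk versions):

lsblk -d -o NAME,MODEL,SERIAL,SIZE
# or per disk:
sudo smartctl -i /dev/sda | grep -i serial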


I asked a similar question not too long ago here.

Generally, people use fail lights (if available) or physical labels. A highly scalable option is to use the vdev_id rules to map drives to location-specific names. This is actually what OpenZFS recommends for large pools.
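
On the fail-light side: if your HBA/backplane supports enclosure LEDs, the ledmon package's ledctl can blink a drive's locate light so you can spot it in the bay (assuming ledmon is installed and the enclosure actually exposes the LEDs):

sudo ledctl locate=/dev/sda           # start blinking the locate LED for this drive
sudo ledctl locate_off=/dev/sda       # turn it off once you've found the disk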