'New' disk gone missing after reboot - how to get it back in there?

I have a 6-disk raidz2 pool where one of the disks went bad and was replaced. This was back in April.
It's an Ubuntu 22.04 LTS system running zfs-0.8.3-1ubuntu12.15. At the time of the replacement I was probably on zfs-0.8.3-1ubuntu12.14, so only a very minor update ago.

What I did at that time was:

zpool replace -o ashift=12 vault vault3 vault3-new

and everything was nice.
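(From memory, I just kept an eye on the resilver afterwards with plain zpool status until the new disk showed ONLINE, nothing fancier than:

# Watch resilver progress on the pool
zpool status -v vault

so I'm fairly sure it completed cleanly at the time.)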

Fast forward to about a week ago, when I rebooted the system to get a new kernel (I swear I've rebooted the system between April and now, but never mind), and now the replaced disk is "missing" and ZFS is trying (unsuccessfully) to jam the old disk back in:

  pool: vault
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 0 days 03:19:13 with 0 errors on Mon Oct  9 14:28:34 2023
config:

        NAME                     STATE     READ WRITE CKSUM
        vault                    DEGRADED     0     0     0
          raidz2-0               DEGRADED     0     0     0
            vault0               ONLINE       0     0     0
            vault1               ONLINE       0     0     0
            vault2               ONLINE       0     0     0
            5008721428915513263  FAULTED      0     0     0  was /dev/disk/by-label/vault
            vault4               ONLINE       0     0     0
            vault5               ONLINE       0     0     0

errors: No known data errors
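For the record, this is roughly how I poked at it first (commands from memory; the by-label path is what ZFS itself reports in the error below, so I'm assuming it's right):

# Show vdev GUIDs instead of names, to see which disk the FAULTED entry refers to
zpool status -g vault

# Dump the ZFS labels on the disk that turns out to be the sticking point
zdb -l /dev/disk/by-label/vault3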

The numeric ID is the GUID of the old disk. When I try the replace again, it refuses:

invalid vdev specification
use '-f' to override the following errors:
/dev/disk/by-label/vault3 is part of active pool 'vault'

Is it fine to just stick the -f force option on it, or should I try something else first?
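For context, the "something else" I had in mind was clearing the stale label and redoing the replace, roughly like this (device names are my best guess from the error message above, and I know labelclear destroys whatever ZFS metadata is on that disk, which is exactly why I'm hesitant):

# Wipe the stale ZFS label that makes the disk look like part of 'vault'
# (destroys any ZFS metadata on it - the raidz2 should still have enough redundancy to resilver)
zpool labelclear -f /dev/disk/by-label/vault3

# Then redo the replace against the old disk's GUID
zpool replace -o ashift=12 vault 5008721428915513263 vault3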