I created a pool named zvol which in retrospect was not the best idea so I’d like to change it.
The pool is not in use and I can successfully export it, and import it with a new name like so
zpool import zvol pool41a
and ‘zpool list’ shows all my pools as expected.
I then run
zpool set cachefile=/etc/zfs/zpool.cache pool41a
and all looks good.
When I reboot the computer, the pool imports as zvol.
I thought I understood the import cachefile mechanism but I feel like I’m missing something here.
I’d appreciate any insight into what I’m doing wrong.
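In case it helps, here is the whole sequence I’m running in one place (a sketch using my pool names; the actual zpool commands are commented out since they need root and a live ZFS system):

```shell
# Pool names from my setup.
OLD=zvol
NEW=pool41a

# zpool export "$OLD"                               # release the pool
# zpool import "$OLD" "$NEW"                        # re-import under the new name
# zpool set cachefile=/etc/zfs/zpool.cache "$NEW"   # refresh the cache file
echo "rename: $OLD -> $NEW"
```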
Wild. IDK what to make of that; I’ve renamed hundreds of pools and never seen a persistence issue in doing so. You export the pool, you import it to a different name, you’re done. I don’t even mess with the cache file, just
zpool export oldname ; zpool import oldname newname is sufficient.
What operating system, distro, and ZFS version are you running, please?
You may be getting bitten by an old cache file. If this system was upgraded from FreeBSD 12.x to 13.x, there may be an old cache file in
/boot/zfs/zpool.cache that still has the old name.
You can just delete the copy in
/boot/zfs and it should solve the problem.
If that doesn’t work, I’d suggest using
zpool reguid newname after having done the rename. This will assign the pool a new GUID and rewrite all of its labels. This will invalidate any wrong cache that might be persisting from somewhere you don’t realize.
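As a sketch, with your pool name substituted (the commands are commented out so you can review them before running as root; treat this as an outline, not a script to paste blindly):

```shell
NEWNAME=pool41a   # substitute your renamed pool here

# Assign a fresh GUID and rewrite all of the pool's labels, so any stale
# cached config (which identifies pools by GUID) no longer matches:
# zpool reguid "$NEWNAME"

# Optionally confirm the new GUID on one of the pool's vdevs
# (/dev/yourdisk is a placeholder for an actual member device):
# zdb -l /dev/yourdisk | grep -i guid
echo "reguid planned for $NEWNAME"
```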
Been working on this today and I have it working. But I have more questions than when I started.
The key appears to be that my root filesystem is also on ZFS, although in a different pool (zroot). The solution was to rebuild my boot image with mkinitcpio after re-importing under the new name. This achieved the desired aim: all the pools now import with the correct names.
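For the record, the rebuild step looks roughly like this on Arch (commands commented out since they modify the system and need root; the preset name is a placeholder for whatever your system uses):

```shell
KERNEL_PRESET=linux   # hypothetical preset name; check /etc/mkinitcpio.d/

# Rebuild the images for every installed preset, so the copy of
# zpool.cache embedded in the initramfs carries the new pool name:
# mkinitcpio -P

# Or rebuild just one preset:
# mkinitcpio -p "$KERNEL_PRESET"
echo "rebuild initramfs for preset $KERNEL_PRESET after renaming pools"
```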
At one point while working on this I disabled both zfs-import-cache.service and zfs-import-scan.service, and disabled zfs.target completely. The system still booted, imported all the pools, and mounted all the filesystems, which surprised me. I think this happened at the same time zroot was imported. It seems like the only service that would do anything in this scenario is zfs-zed.service, which might be the only reason to enable zfs.target.
I guess the boot image lists and includes ALL the pools present in zpool.cache when mkinitcpio -P is run. I would have thought it would only include the pools with the bootfs property set but that does not appear to be the case.
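If I understand zdb correctly, you can dump a cache file to see exactly which pools (and names) it records, which would confirm what ends up in the boot image (sketch; the zdb call is commented out since it needs ZFS installed):

```shell
CACHE=/etc/zfs/zpool.cache

# Print the cached configuration of every pool in the given cache file:
# zdb -C -U "$CACHE"
echo "inspect $CACHE with zdb -C -U"
```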
Does this mean that you don’t need to run zfs-import-cache.service or zfs-import-scan.service?
I re-read the Arch wiki pages for both ZFS and Installing Arch on ZFS, but they don’t really get into this level of detail. The wiki only discusses importing pools on the generic ZFS page, not specifically for the case of booting from ZFS, which seems to behave differently.
I have more questions than when I started, but it’s working, so all in all a good day. I’m definitely not sure I understand this completely, so any corrections, thoughts, and discussion are welcome.
I appreciate everyone taking time to look at this.
Please show us the output of these two commands:
zfs get mountpoint -t filesystem