Expanding pool, should I change to gptid?

I’m adding larger drives to my pool, and since this is the first time I’m doing this, I decided to do some research before actually doing it.

I believe I created this pool via the command line, instead of using the FreeNAS GUI, and it uses the geom ID and not the gptid.

  pool: Volume1
 state: ONLINE
  scan: scrub repaired 0 in 0 days 06:50:04 with 0 errors on Tue Oct 31 06:50:19 2023
config:

        NAME        STATE     READ WRITE CKSUM
        Volume1     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            ada2    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
            ada5    ONLINE       0     0     0
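From what I can tell, these are the commands to check what identifiers actually exist on FreeBSD (device name is one of mine; whole disks with no partition table won’t have any gptid entries, since the gptid comes from a GPT partition’s rawuuid):

# List all GEOM labels (gptid/..., gpt/..., diskid/...) and the
# providers they map to:
glabel status

# Check whether a pool member has a partition table at all:
gpart show ada2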

I found different opinions on using a label vs the geom ID. As far as identifying the drives goes, I usually label my drives during install, so I can easily check the SN if I need to replace one in the future.

I found a post saying that it’s better to use partitions because disk sizes are not perfect, but partitions are:

https://freebsd-questions.freebsd.narkive.com/XdKaE3zG/zfs-gptids-and-using-the-whole-disk

But I find it hard to believe that ZFS would not be able to handle a small difference in disk geometry. I always keep the same model of drives in a pool. The only factor I can think of would be replacing a faulty drive in a few years, where physical changes (upgrades, etc.) to that model might have made the difference larger.

On the other side, Oracle’s page recommends using the entire disk (but I’m assuming this is specific to Solaris):

“The recommended mode of operation is to use an entire disk, in which case the disk does not require special formatting.”

So my question is: as I’m replacing these drives, should I also switch to using GPT partitions (and gptids) instead?
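From what I’ve read so far, the manual command-line version of that would look roughly like this (the label, device names, and pool name are placeholders, not from my system):

# Partition the new disk with GPT (1 MiB alignment), adding a label
# with the drive's serial number:
gpart create -s gpt ada6
gpart add -t freebsd-zfs -a 1m -l SERIAL123 ada6

# Find the new partition's rawuuid (this is what gptid/... refers to):
gpart list ada6 | grep rawuuid

# Replace the old raw-device vdev with the new partition, by gptid:
zpool replace Volume1 ada2 gptid/<rawuuid-from-above>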

I’m also wondering whether, if I replace the drives using the FreeNAS GUI instead of the command line, it will create the gptid (as per this post, “dont see /dev/disk/by-id”, FreeNAS uses gptid behind the scenes).

Thanks!

Linux user here with practically zero experience with BSD.

When I create pools using partitions, zpool create uses the partitions as provided. When I provide entire disks, it first partitions the disk, creating a small extra partition that deliberately wastes a little space so that a slightly smaller replacement drive can still be used in the future. For example:

hbarta@oak:~$ sudo sgdisk -p /dev/sda
Disk /dev/sda: 7814037168 sectors, 3.6 TiB
Model: TOSHIBA MG04ACA4
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): A768A9BE-2C1E-6545-A2B6-79392C62B98A
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 7814037134
Partitions will be aligned on 2048-sector boundaries
Total free space is 3693 sectors (1.8 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048      7814019071   3.6 TiB     BF01  zfs-6e0fa72731c80b47
   9      7814019072      7814035455   8.0 MiB     BF07  
hbarta@oak:~$ 

I don’t know if whole-disk handling is the same on BSD. I generally don’t use information from the Oracle docs, as I don’t know how much Oracle ZFS has diverged from OpenZFS. (I also create my pools with ashift=12, and would be surprised if this particular drive really used 512-byte sectors rather than just lying to the OS so as not to confuse older Windows versions.)
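For reference, by “create the pools with ashift=12” I mean something like this (pool and disk names are placeholders):

# Force 4 KiB-aligned writes even if the drive reports 512-byte
# logical sectors; ashift cannot be changed after the vdev is created:
sudo zpool create -o ashift=12 tank raidz1 \
    /dev/disk/by-id/ata-DISK1 \
    /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3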

Edit: As far as identifying the drives goes, ZFS can scan the disks and figure out which ones belong to which pool, but I always prefer to use an unambiguous identifier just in case…
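On Linux that mostly means pointing zpool at /dev/disk/by-id, for example when importing (pool name is a placeholder):

# Import using the stable by-id paths instead of sdX names,
# so zpool status shows unambiguous identifiers:
sudo zpool import -d /dev/disk/by-id tank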

There’s a fantastic pair of books on ZFS by Michael Lucas and Allan Jude, and unless I’m completely misremembering, they recommend partitioning. One reason I like to do that is because I can use the partition labels. I have a simple bash script that does the partitioning and adds a label that includes the disk’s serial number and the bay it’s physically located in. That way when it dies, I don’t have to go searching for documentation; it’s right there in the label.
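The idea is essentially this (a simplified sketch, not the real script, which has more error checking; BF01 is the Solaris/ZFS type code visible in the sgdisk output above):

#!/usr/bin/env bash
# Sketch: partition a disk and label it "<serial>-bay<N>" so the
# GPT partition name identifies the physical drive.
set -euo pipefail

disk=$1   # e.g. /dev/sda
bay=$2    # e.g. 3

serial=$(lsblk -dno SERIAL "$disk")

sgdisk --zap-all "$disk"                                   # wipe old GPT/MBR
sgdisk -n1:0:0 -t1:BF01 -c1:"${serial}-bay${bay}" "$disk"  # one ZFS partition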

Replying to my own answer… I created a TrueNAS VM and ran some tests on it. If I try to replace a drive with a new one of the same size in the UI, it fails, I believe because it’s trying to create the GPT partitions and ends up with slightly less usable space than the raw disk it’s replacing.

If I add a bigger drive, however, it goes through and creates the GPT partition.
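You can then verify what the UI created (device name is a placeholder):

# Show the partition table the UI created on the new disk:
gpart show da1

# Confirm a gptid label now exists for the ZFS partition:
glabel status | grep da1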