Meaning of "(non-allocating)" in "zpool status"?

Greetings everyone. I’ve been using ZFS for a few years, but I just noticed the words (non-allocating) in the output of zpool status. It’s possible the words have always been there and I simply never noticed. I am wondering what they mean.

Results online regarding the meaning of this seem to be pretty sparse, aside from a question on Reddit’s r/Openzfs, which didn’t seem to gain a lot of traction:

Question on r/Openzfs

Can anyone shed some light on what this means?


I don’t think there’s any sensitive data in the output of this command beyond some drive identifiers, so I’ll paste it below (with the drive serials/numbers/IDs redacted):

$ sudo zpool status
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0B in 05:47:02 with 0 errors on Mon Oct 30 05:47:04 2023
config:

        NAME                                          STATE     READ WRITE CKSUM
        zroot                                         ONLINE       0     0     0
          mirror-0                                    ONLINE       0     0     0
            ata-WDC_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  ONLINE       0     0     0  (non-allocating)
            ata-WDC_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  ONLINE       0     0     0  (non-allocating)
          mirror-1                                    ONLINE       0     0     0
            ata-WDC_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  ONLINE       0     0     0  (non-allocating)
            ata-WDC_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  ONLINE       0     0     0  (non-allocating)

errors: No known data errors

And, in case it gives any hints, the output of zpool list -v:

$ sudo zpool list -v
NAME                                           SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot                                         7.25T  3.47T  3.78T        -         -    19%    47%  1.00x    ONLINE  -
  mirror-0                                    3.62T  1.74T  1.89T        -         -    19%  47.9%      -    ONLINE
    ata-WDC_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx      -      -      -        -         -      -      -      -    ONLINE
    ata-WDC_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx      -      -      -        -         -      -      -      -    ONLINE
  mirror-1                                    3.62T  1.74T  1.89T        -         -    19%  47.9%      -    ONLINE
    ata-WDC_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx      -      -      -        -         -      -      -      -    ONLINE
    ata-WDC_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx      -      -      -        -         -      -      -      -    ONLINE

Thank you all.


Pretty sure you’ve encountered a bug. Logically, “non-allocating” would mean “you can’t write to it,” and that appears to be borne out by grepping through the ZFS codebase:

if (txg == 0)
	spa_config_enter(spa, SCL_ALLOC, FTAG, RW_WRITER);

/*
 * If the vdev is marked as non-allocating then don't
 * activate the metaslabs since we want to ensure that
 * no allocations are performed on this device.
 */
if (vd->vdev_noalloc) {
	/* track non-allocating vdev space */
	spa->spa_nonallocating_dspace += spa_deflate(spa) ?
	    vd->vdev_stat.vs_dspace : vd->vdev_stat.vs_space;
} else if (!expanding) {
	metaslab_group_activate(vd->vdev_mg);
	if (vd->vdev_log_mg != NULL)
		metaslab_group_activate(vd->vdev_log_mg);
}

From this, it certainly looks as though your pool has erroneously marked every vdev as non-allocating, with uncertain effects. I’d strongly advise opening a bug report (and, ideally, linking it here afterwards).


Jim is correct here: non-allocating is a new vdev property, designed for queueing up device removals. It lets you ensure that when you remove device A, its data isn’t evacuated onto device B, which you plan to remove as soon as device A’s removal finishes. Instead, all of the data goes to devices C/D/E, etc.
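As a toy illustration of that behavior (not OpenZFS code; the names here are made up), an allocator that honors a per-vdev noalloc flag simply skips those vdevs when choosing where new or evacuated data lands:

```python
# Toy model of allocation with non-allocating vdevs (illustrative only;
# not the OpenZFS implementation).
from dataclasses import dataclass

@dataclass
class Vdev:
    name: str
    noalloc: bool = False  # corresponds to the "(non-allocating)" flag

def allocation_targets(vdevs):
    """Return only the vdevs eligible for new allocations."""
    return [v for v in vdevs if not v.noalloc]

# Queued removal: B is marked non-allocating up front, so data
# evacuated from the device being removed lands only on C/D/E.
vdevs = [Vdev("B", noalloc=True), Vdev("C"), Vdev("D"), Vdev("E")]
print([v.name for v in allocation_targets(vdevs)])  # ['C', 'D', 'E']
```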

I would check that your ZFS userspace tools and kernel module are the same version.
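One quick way to spot such a mismatch is to compare the two lines that `zfs version` prints. A hedged sketch (the sample output is inlined here so the snippet runs standalone; in practice you'd use `output="$(zfs version)"`):

```shell
# Compare the userspace and kernel-module versions reported by `zfs version`.
output="zfs-2.2.0-pve1
zfs-kmod-2.1.12-pve1"
user=$(printf '%s\n' "$output" | sed -n 's/^zfs-\([0-9][0-9.]*\)-.*/\1/p')
kmod=$(printf '%s\n' "$output" | sed -n 's/^zfs-kmod-\([0-9][0-9.]*\)-.*/\1/p')
if [ "$user" != "$kmod" ]; then
    echo "mismatch: userspace $user vs kernel module $kmod"
fi
```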

You can learn more about this feature here: https://www.youtube.com/watch?v=Yy8fhyrBV-Y


Just ran into this myself: I had upgraded to Proxmox’s opt-in testing kernel but hadn’t rebooted yet.

Before reboot:

root@pve:~# uname -r
6.2.16-10-pve

root@pve:~# zfs version
zfs-2.2.0-pve1
zfs-kmod-2.1.12-pve1

root@pve:~# zpool status
  pool: npool
 state: ONLINE
  scan: scrub repaired 0B in 00:35:48 with 0 errors on Wed Nov  1 21:55:18 2023
config:

        NAME                                             STATE     READ WRITE CKSUM
        npool                                            ONLINE       0     0     0
          mirror-0                                       ONLINE       0     0     0
            nvme-INTEL_SSDPF2KX076TZ_PHAC140402KB7P6CGN  ONLINE       0     0     0  (non-allocating)
            nvme-INTEL_SSDPF2KX076TZ_PHAC140402AN7P6CGN  ONLINE       0     0     0  (non-allocating)
          mirror-1                                       ONLINE       0     0     0
            nvme-INTEL_SSDPF2KX076TZ_PHAC1404029K7P6CGN  ONLINE       0     0     0  (non-allocating)
            nvme-INTEL_SSDPF2KX076TZ_PHAC1404029J7P6CGN  ONLINE       0     0     0  (non-allocating)

After reboot:

root@pve:~# uname -r
6.5.3-1-pve

root@pve:~# zfs version
zfs-2.2.0-pve1
zfs-kmod-2.2.0-pve1

root@pve:~# zpool status
  pool: npool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:35:48 with 0 errors on Wed Nov  1 21:55:18 2023
config:

        NAME                                             STATE     READ WRITE CKSUM
        npool                                            ONLINE       0     0     0
          mirror-0                                       ONLINE       0     0     0
            nvme-INTEL_SSDPF2KX076TZ_PHAC140402KB7P6CGN  ONLINE       0     0     0
            nvme-INTEL_SSDPF2KX076TZ_PHAC140402AN7P6CGN  ONLINE       0     0     0
          mirror-1                                       ONLINE       0     0     0
            nvme-INTEL_SSDPF2KX076TZ_PHAC1404029K7P6CGN  ONLINE       0     0     0
            nvme-INTEL_SSDPF2KX076TZ_PHAC1404029J7P6CGN  ONLINE       0     0     0