RAIDZ2 size with odd-sized drives

I am now on phase two of shifting my data around and just created a 6-disk RAIDZ2 out of 1 x 1 TB, 1 x 2 TB, 2 x 3 TB, and 2 x 4 TB drives.

I was expecting a pool size of about 3.6 TB, but it comes out as 5.45 TB?

That is odd. But you didn’t show us any commands and their outputs, so all anybody can say is “that is odd.” :slight_smile:

I know - sorry :slight_smile: Maybe I should not have done it so late:

To create a similar starting point I destroyed the pool with zpool destroy fatagnus - I don't know if I should have wiped the disks as well?
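
For the record, if stale labels had gotten in the way, I understand something along these lines would likely have cleared them (shown for the 1 TB disk; when ZFS has partitioned a disk itself the label sits on the -part1 device, and both commands are destructive, so this is only a sketch):

zpool labelclear -f /dev/disk/by-id/ata-ST1000DM003-1CH162_S1DGL2PQ-part1
wipefs -a /dev/disk/by-id/ata-ST1000DM003-1CH162_S1DGL2PQ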

zpool create fatagnus raidz2 ata-ST1000DM003-1CH162_S1DGL2PQ ata-ST32000542AS_5XW0L87Q ata-TOSHIBA_HDWD130_Z053RNKAS ata-TOSHIBA_HDWD130_Z053RR2AS ata-WDC_WD40EFZX-68AWUN0_WD-WX22DB1DF7PL ata-WDC_WD40EFZX-68AWUN0_WD-WX52DB1ASC33
invalid vdev specification
use '-f' to override the following errors:
raidz contains devices of different sizes

So I do:

zpool create -f fatagnus raidz2 ata-ST1000DM003-1CH162_S1DGL2PQ ata-ST32000542AS_5XW0L87Q ata-TOSHIBA_HDWD130_Z053RNKAS ata-TOSHIBA_HDWD130_Z053RR2AS ata-WDC_WD40EFZX-68AWUN0_WD-WX22DB1DF7PL ata-WDC_WD40EFZX-68AWUN0_WD-WX52DB1ASC33
zpool status fatagnus
  pool: fatagnus
 state: ONLINE
config:

        NAME                                          STATE     READ WRITE CKSUM
        fatagnus                                      ONLINE       0     0     0
          raidz2-0                                    ONLINE       0     0     0
            ata-ST1000DM003-1CH162_S1DGL2PQ           ONLINE       0     0     0
            ata-ST32000542AS_5XW0L87Q                 ONLINE       0     0     0
            ata-TOSHIBA_HDWD130_Z053RNKAS             ONLINE       0     0     0
            ata-TOSHIBA_HDWD130_Z053RR2AS             ONLINE       0     0     0
            ata-WDC_WD40EFZX-68AWUN0_WD-WX22DB1DF7PL  ONLINE       0     0     0
            ata-WDC_WD40EFZX-68AWUN0_WD-WX52DB1ASC33  ONLINE       0     0     0

errors: No known data errors
zpool list fatagnus
NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
fatagnus  5.45T  1.23M  5.45T        -         -     0%     0%  1.00x    ONLINE  -

And the version:

zpool version
zfs-2.2.6-pve1
zfs-kmod-2.2.6-pve1

By using -f, have I accepted an odd way of distributing data and parity?

zpool list shows you RAW capacity, not capacity after redundancy/parity - and in a raidz vdev every member only counts for as much as the smallest disk. So, roughly 6T for six disks that each count as roughly 1T. The -f only overrode the size-mismatch warning; data and parity are laid out normally.

Either zfs list or df will show you the correct capacity estimate.
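
Back-of-the-envelope, with every raidz member counted at the size of the smallest (1 TB) disk:

6 disks x ~0.91 TiB               ≈ 5.45 TiB raw     (what zpool list reports)
(6 - 2 parity) disks x ~0.91 TiB  ≈ 3.6 TiB usable   (a bit less once slop space and metadata overhead come out)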

As is often the case, Jim, you are right :slight_smile:

zfs list fatagnus
NAME       USED  AVAIL  REFER  MOUNTPOINT
fatagnus   839K  3.52T   192K  /fatagnus

Time to start throwing the datasets back onto fatagnus.
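
As a rough sketch of how I plan to move them (the source pool and dataset names here are just examples), it will be snapshot-and-send for each dataset:

zfs snapshot -r tank/media@move
zfs send -R tank/media@move | zfs receive -u fatagnus/media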

In case anybody is wondering 'Why is he wasting 2 x 4 TB and 2 x 3 TB drives by mixing them with a 2 TB and even a 1 TB drive?' - the answer is that this is temporary. Tank consists of a 4 TB and an 8 TB drive, and those two are going to replace the 1 TB and 2 TB drives as soon as I have shifted the datasets.
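
The swaps themselves should look roughly like this (the new-disk ids are placeholders, and I will let each resilver finish before starting the next); with autoexpand on, the vdev ought to grow on its own once both small drives are out, since capacity is governed by the smallest member:

zpool set autoexpand=on fatagnus
zpool replace fatagnus ata-ST1000DM003-1CH162_S1DGL2PQ ata-NEW-4TB-DISK
zpool replace fatagnus ata-ST32000542AS_5XW0L87Q ata-NEW-8TB-DISK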

When the 1 TB and 2 TB drives are out of the RAIDZ2 pool, I will remove them from the box and replace them with two smallish SSDs - 480 GB + 500 GB - and create a mirror for container and VM storage. I will have to split my Nextcloud LXD container storage: the OS and application on the SSD mirror, and the data on the spinning pool, probably through a mount point in Proxmox.
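
Roughly what I have in mind for that part (device ids, the container id and the paths are all placeholders, and this assumes the container ends up as a regular Proxmox-managed one):

zpool create ssdpool mirror ata-SSD-480GB-SERIAL ata-SSD-500GB-SERIAL
pct set 101 -mp0 /fatagnus/nextcloud-data,mp=/mnt/ncdata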

I will be monitoring the performance of the 4 TB drive from Tank, as it is a 2.5" HDD and thus most likely an SMR drive - I might end up replacing it before the two smaller 3 TB drives.
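
For the monitoring part, per-disk latency from zpool iostat should make an SMR drive stand out fairly quickly under sustained writes or a resilver:

zpool iostat -vl fatagnus 5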