Large expansion of a ZFS pool

I have a TrueNAS server where the main data pool is reaching capacity. It currently uses a single vdev of 5x8TB drives in a RAIDZ1 configuration. RAIDZ2/mirroring is not a requirement.

Over time, I have added drives and expanded the array (including a full content rewrite each time to ensure the most efficient rebalancing). Each expansion required only minutes of downtime to install the new drive (which was awesome), and I’d like to minimise downtime this time too.
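
In case it matters, the rewrite step is roughly this (a sketch only; "tank/data" is a placeholder name, and it assumes enough free space and no writers during the copy):

```
# Copy the dataset so its blocks are rewritten across all drives,
# then swap names. tank/data is a placeholder dataset name.
zfs snapshot tank/data@rebalance
zfs send tank/data@rebalance | zfs recv tank/data-rebalanced
# After verifying the copy, drop the original and rename the copy back.
zfs destroy -r tank/data
zfs rename tank/data-rebalanced tank/data
```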

I have one slot left in the server for another drive, but I know that adding it is only a temporary measure. I want to make a longer-lasting change, perhaps with much larger drives (e.g. 3x20TB in RAIDZ1, with 3 more slots available to ultimately reach 6x20TB).
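
(If the pool is on OpenZFS 2.3 or newer, which recent TrueNAS releases ship, growing 3x20TB to 6x20TB should just be raidz expansion, one disk at a time. A sketch, with "tank", "raidz1-0" and the device name as placeholders:)

```
# Attach a new disk to the existing raidz vdev (OpenZFS 2.3+ only).
# 'tank', 'raidz1-0' and '/dev/sdf' are placeholders.
zpool attach tank raidz1-0 /dev/sdf
zpool status tank   # watch expansion progress; repeat per disk
```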

There are so many options for doing this… for example:

  • Fail a drive, replace it with a larger one, and rinse and repeat five times. Then set autoexpand (see the first sketch after this list).
  • Replace the drive, then fail it (I don’t fully understand this one, but apparently the old drive stays online during the resilver, so redundancy is kept throughout).
  • An external eSATA enclosure with a new array of larger drives (q1: can I then just swap the server/enclosure drives? q2: can I just set up an external vdev with large drives, then fail the internal one? See the send/receive sketch at the end.)
  • Just keep adding drives in an eSATA enclosure (seems risky to split a vdev between internal and external bays).
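
For the first two bullets, this is the sequence as I understand it ("tank" and the device names are placeholders):

```
# Grow the pool automatically once every drive in the vdev is larger.
zpool set autoexpand=on tank

# With a free slot, the new drive resilvers in while the old one
# stays online, so redundancy is kept the whole time.
zpool replace tank /dev/sdb /dev/sdf
zpool status tank   # wait for the resilver, then repeat per drive

# If autoexpand was off, the extra space can be claimed per disk:
zpool online -e tank /dev/sdf
```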

So what is the best strategy when one needs to expand but is running out of internal slots?

Many thanks for your advice.

P.S. Yes, I know that a backup / build new pool / restore would also work, but where is the ZFS magic in that? :-) Plus I would still need an enclosure to keep uptime high.
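
For reference, I imagine the enclosure migration (bullet three, q2) would look something like this; "bigtank" and the device names are made up:

```
# Build the new pool on the enclosure drives.
zpool create bigtank raidz1 /dev/sdg /dev/sdh /dev/sdi

# First full replication pass while the old pool stays in service.
zfs snapshot -r tank@migrate1
zfs send -R tank@migrate1 | zfs recv -F bigtank

# Then stop writers briefly and send only the delta to cut downtime.
zfs snapshot -r tank@migrate2
zfs send -R -i @migrate1 tank@migrate2 | zfs recv -F bigtank
```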