Hi Shad! Welcome to the exciting world of ZFS
and congratulations on jumping in the deep end off the 10m platform on day one!
This might just be your BIOS taking a while to start up? I have an HP Z440 and it takes longer to start booting than other machines I've had. It's not as bad as my HP ML110 though; that server BIOS takes forever.
The binary images from the ZBM website aren't signed AFAIK. There are ways of signing things if you really want to, and there are reasons for doing so, but I wouldn't personally bother at this stage. There are plenty of more interesting Linux things to play with, and drive-by malware isn't quite such an urgent risk on Linux (not that there aren't exploits out there).
RAID nomenclature is a little different with ZFS. RAIDZ1/2/3 is mostly analogous to RAID5/6, albeit with striping organised at the file block level rather than strictly across whole disks as in traditional RAID. What would be called “RAID0”, i.e. striping, doesn't really have a name in ZFS, because it's simply a pool with multiple VDEVs in it - data is always striped across VDEVs, weighted by factors like the size and remaining capacity of each VDEV (so again a little more complex). “RAID1” is called a “mirror” and again exists at the VDEV level, so it's typical to have more than one mirror VDEV in a pool, i.e. the pool is striped across mirrors, which would be called “RAID10” in standard RAID terms.
Notably, you can also stripe across RAIDZ VDEVs (what would that be, RAID50?), and even mix and match VDEV types in the same pool - e.g. striping across a 2x mirror, a 3x mirror, a RAIDZ1 and a RAIDZ3 - although given the wildly different performance characteristics of the VDEV types, it's unlikely anyone would do that other than for convoluted historical reasons.
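To make those layouts concrete, here's a rough sketch of what pool creation looks like for each - the pool name “tank” and the disk names are just placeholders:

zpool create tank mirror sda sdb mirror sdc sdd            # two striped mirror VDEVs, i.e. the “RAID10” equivalent
zpool create tank raidz1 sda sdb sdc sdd                   # a single RAIDZ1 VDEV, roughly “RAID5”
zpool create tank raidz1 sda sdb sdc raidz1 sdd sde sdf    # striping across two RAIDZ1 VDEVs, the “RAID50”-ish case

(In practice you'd normally use /dev/disk/by-id/... paths rather than the short sdX names, so devices don't get shuffled between boots.)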
There is also dRAID, which is similar to RAIDZ with the added benefit that spare capacity is distributed across all member drives for faster “resilvering” (recovery) after a drive failure - but this is an advanced setup that really only makes sense for large enterprise disk shelves with multiple hot-spare devices on standby.
@mercenary_sysadmin, founder of this site and the OpenZFS project's long-time go-to guy for education and community outreach, has some very strong opinions about the superiority of striped mirrors (i.e. RAID10 equivalent) over RAIDZ (i.e. RAID5/6 equivalent), due to ZFS-specific details of the way each is implemented.
My takeaway from watching how people use these over the past couple of years is that RAIDZ is generally best reserved for bulk storage across a large number of disks (8+?)… think backup servers, video archives, and so on. Multiple two-way mirrors deliver much higher performance, recoverability, and flexibility, and don't really carry much of a space penalty compared to RAIDZ when you're only using a handful of mirrors. More knowledgeable ZFS users might disagree, though?
With mirrors, it’s trivial:
zpool attach my-pool disk1p2 disk2p2
will turn your single disk1 VDEV into a mirror. Check the results with
zpool status
Then when you get the next two drives, you can partition them and then run:
zpool add my-pool mirror disk3p2 disk4p2
That will add a second mirror made of your ZFS partitions on disks 3 and 4. Any NEW data will be striped across both mirror VDEVs; existing data remains on the first mirror. This is not a problem from a space point of view, but it means you lose a little theoretical performance on data that is only being read from two drives instead of all four.
There is a hack to rebalance this: use zfs send to make a second copy of the datasets, which will be written balanced across both VDEVs, then delete the old unbalanced datasets and rename the new balanced ones back to the old names (roughly as sketched below). For a root-on-ZFS setup like yours, you would need to do this from the live USB image again, as your root filesystem would be broken mid-migration.
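Something like this, assuming a hypothetical dataset called my-pool/data - the names are placeholders, so adapt them to your actual layout and double-check before destroying anything:

zfs snapshot -r my-pool/data@rebalance                            # snapshot the dataset tree you want to rebalance
zfs send -R my-pool/data@rebalance | zfs recv my-pool/data-new    # the copy gets written striped across all current VDEVs
zfs destroy -r my-pool/data                                       # only after verifying the copy is complete!
zfs rename my-pool/data-new my-pool/data                          # put the balanced copy back under the old name
zfs destroy -r my-pool/data@rebalance                             # clean up the leftover snapshot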
Now, with RAIDZ, I don’t know of any way to convert a single drive into a RAIDZ VDEV, although again more knowledgeable users might. This doesn’t mean you would need to lose your installation though - you could just add a small extra disk (even a USB thumb drive would do), create a single-drive zpool on it, “zfs send” your painstakingly created datasets to that for safe keeping, and then wipe the ZFS information from your disk with
zpool labelclear -f disk1p2
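One gotcha: labelclear won't touch a device that's still part of an active pool, so export or destroy the old pool first. The “park it on a spare disk” part might look roughly like this - names are placeholders, and I'm assuming your datasets live under my-pool/ROOT:

zpool create scratch /dev/sdX1                                # temporary single-disk pool on the spare/USB drive
zfs snapshot -r my-pool/ROOT@migrate                          # recursive snapshot of everything you want to keep
zfs send -R my-pool/ROOT@migrate | zfs recv -u scratch/ROOT   # park a copy on the scratch pool, left unmounted
zpool export my-pool                                          # release the old pool before running labelclear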
Then partition your other 3 SSDs and create a new pool with a single RAIDZ VDEV made out of the 4 partitions. You can then “zfs send” your datasets back, copy ZFSBootMenu into any EFI partitions you made on the other 3 disks for redundancy, and use efibootmgr to register all of them with the UEFI firmware.
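Roughly, again with placeholder names (and whatever pool/dataset properties you used the first time round - ashift, compression, encryption, etc.):

zpool create my-pool raidz1 disk1p2 disk2p2 disk3p2 disk4p2   # new pool with one 4-wide RAIDZ1 VDEV, reusing the old name
zfs send -R scratch/ROOT@migrate | zfs recv -u my-pool/ROOT   # bring the datasets back
efibootmgr -c -d /dev/disk2 -p 1 -L "ZFSBootMenu (disk 2)" -l '\EFI\ZBM\VMLINUZ.EFI'   # repeat per disk

The device node (/dev/disk2 here) and the loader path are just guesses at your layout, so adjust both to match where you actually put the ZBM .EFI file.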
Those “scripts” (there are actually manual steps in there too, watch out!) are just for building a custom ZFSBootMenu with SSH access and/or a bleeding-edge version of ZFS. Unless you really need remote SSH access during boot right now, I wouldn't bother making your own custom ZFSBootMenu image, and would stick with the EFI image you already downloaded from the ZFSBootMenu website.
P.S. - hope you get well soon!