Check my work before I create the pool?

I am building a home media server from a decade-old Supermicro 4U and 8x 14TB SAS drives.

Host is a clean install of Debian 12 bookworm with XFCE.

Debian does not ship OpenZFS, but it is available from the contrib repository. I lifted installation instructions from ZFS - Debian Wiki; I assume this is the best method/repository to use with bookworm? I'm also referencing the ZFS tuning cheat sheet – JRS Systems: the blog.

First step: appending “contrib non-free” to all 6 entries in /etc/apt/sources.list.
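
For reference, here is roughly what one of the modified lines plus the backports entry looks like on my box (the mirror URL is just whatever the installer picked, and a default bookworm install may also already list non-free-firmware, so treat these as illustrative):

#example /etc/apt/sources.list entry after appending the extra components
deb http://deb.debian.org/debian bookworm main contrib non-free

#bookworm-backports entry, added if not already present
deb http://deb.debian.org/debian bookworm-backports main contrib non-free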

#install ZFS utilities from backports

sudo apt update

sudo apt install linux-headers-amd64

sudo apt install -t bookworm-backports zfsutils-linux
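
I also plan a quick sanity check that the DKMS module actually built and loads (my own addition, not something the wiki calls out):

#confirm the zfs kernel module loads and report the installed version
sudo modprobe zfs
zfs version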

#Create ZFS pool. My first real question: why does ls -l /dev/disk/by-id/ list my drives twice? HBA thing?

#do I want scsi-3 or wwn-0x prefix on the ID??
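
To double-check that the two by-id entries really are the same drives, I was going to cross-reference them like this (assuming I'm reading lsblk's SERIAL/WWN columns correctly):

#both symlinks should resolve to the same sdX node, and lsblk ties serial and WWN together
ls -l /dev/disk/by-id/ | grep -E 'scsi-3|wwn-0x'
lsblk -o NAME,SERIAL,WWN,SIZE,MODEL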

zpool create ocean raidz2 scsi-35000cca2ad1aaff8 scsi-35000cca2ad1aca44 scsi-35000cca2ad1aed0c scsi-35000cca2ad1af534 scsi-35000cca2ad1af928 scsi-35000cca2ad1afe4c scsi-35000cca2ad1afef4 scsi-35000cca2ad1b0318

#vs

zpool create ocean raidz2 wwn-0x5000cca2ad1aaff8 wwn-0x5000cca2ad1aca44 wwn-0x5000cca2ad1aed0c wwn-0x5000cca2ad1af534 wwn-0x5000cca2ad1af928 wwn-0x5000cca2ad1afe4c wwn-0x5000cca2ad1afef4 wwn-0x5000cca2ad1b0318

#should I set -o ashift=12 or just trust that ZFS will see my 4k sectors automatically?
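
For what it's worth, here is how I'd check what the drives report and pin ashift explicitly, if that's the right call (this just mirrors the wwn-based create above):

#4Kn drives should report 4096/4096 here
lsblk -o NAME,LOG-SEC,PHY-SEC,MODEL

#same create command with ashift pinned instead of auto-detected
zpool create -o ashift=12 ocean raidz2 wwn-0x5000cca2ad1aaff8 wwn-0x5000cca2ad1aca44 wwn-0x5000cca2ad1aed0c wwn-0x5000cca2ad1af534 wwn-0x5000cca2ad1af928 wwn-0x5000cca2ad1afe4c wwn-0x5000cca2ad1afef4 wwn-0x5000cca2ad1b0318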

#disable recording of access time

zfs set atime=off ocean

#enable lz4 compression

zfs set compression=lz4 ocean

#create datasets

mkdir -p /dataset
zfs create -o mountpoint=/dataset ocean/dataset

#set large record size for large files

zfs set recordsize=1M ocean
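
Alternatively, if it's better to keep the 1M recordsize scoped to the media dataset instead of the whole pool, I'd do this (dataset name from above):

#set it on the media dataset only; any child datasets inherit it
zfs set recordsize=1M ocean/dataset
zfs get recordsize ocean/dataset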

BOM

Supermicro SC846B 24 drive bay Chassis

X9DRi-LN4F+ motherboard

2x E5-2695 Ivy Bridge, 12 cores / 24 threads each

256GB ECC DDR3, Samsung M393B2G70QH0-YK0, 16x 16GB sticks

LSI SAS9220-8i HBA (SAS2008 chipset), H3-25097-03B, in IT mode

BPN-SAS2-846EL1 backplane

2x 800W quiet “SQ” power supplies

OS drive: 1TB Samsung 860 Evo 2.5" SSD on SATA 0, Btrfs

8x 14TB WD Ultrastar DC HC530 WUH721414AL4204 0F31021 Single Port SAS 4Kn, single vdev, 8-wide RAIDZ2

1x tested cold spare identical 14TB drive

VGA monitor, Matrox onboard video

USB mouse & Keyboard

Target performance: 1Gb LAN, 1-2 video streams; possible near-future upgrade to a better switch to leverage 4x 1Gb link aggregation; may need a GPU for transcoding. Standard NAS duties over NFS.
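
For the NFS piece, my tentative plan is just the stock kernel NFS server plus ZFS's sharenfs property (leaving the export options at the default "on" for now; I know they can be tightened later):

#share the media dataset over NFS and confirm the export shows up
sudo apt install nfs-kernel-server
sudo zfs set sharenfs=on ocean/dataset
showmount -e localhost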

Drive/HBA/backplane performance is good, at 2.2GB/s peak in badblocks.

#Create ZFS pool. My first real question: why does ls -l /dev/disk/by-id/ list my drives twice? HBA thing?

#do I want scsi-3 or wwn-0x prefix on the ID??

There are generally at least two entries for one drive in by-id: the scsi/sata/sas one, and the wwn one. Usually, the first incorporates the drive’s model and serial number. The wwn one always incorporates the WWN ID of the drive, which is a globally unique (even across manufacturers) ID every “real” drive (thumbdrives, cf cards, etc don’t generally get one) has baked into its firmware.

Which you use is a matter of preference. Some prefer the SATA/SAS/SCSI one because it also reminds them of the model of the drive in question. Personally, I prefer the WWN, because the SATA/SAS/SCSI one is painfully long, and because I like to use the last four digits of the WWN as a physical label for the outside of the drive where it’s easy to see when you need to replace it.

Note: the WWN is also printed on the OEM label on nearly all makes and models of drive. The only exception I can recall is the old HGST Helium drives (RIP) which included the manufacturer serial but not the WWN on the OEM-printed label. (I still chose WWN for those drives, because I make my own, more visible labels anyway.)
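
If it helps, here's a quick one-liner sketch for generating those labels; it just prints the last four hex digits of each WWN next to the device it currently maps to (skipping the -part entries):

root@yourbox:~# for d in /dev/disk/by-id/wwn-0x*; do case $d in *-part*) continue;; esac; echo "$(basename $d) -> $(readlink -f $d)  label ${d: -4}"; done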

mkdir -p /

zfs create -o mountpoint=/ tank/

This doesn’t work. It looks like you’re trying to go with ZFS on root, but if you want that, you need https://zfsbootmenu.org/ – which does work just fine, and will take care of this for you.

Let’s say instead that you wanted a pool named tank mounted on /foo. That would look a little different also, because the root dataset of tank isn’t something you can manually create; it was automatically created during the zpool create tank raidz2 d1 d2 d3 d4 d5 d6 d7 d8 stage.

Instead, you’d do this:

root@yourbox:~# mkdir -p /foo
root@yourbox:~# zfs set mountpoint=/foo tank
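
Then a quick sanity check that everything landed where you expect:

root@yourbox:~# zfs get mountpoint tank
root@yourbox:~# zfs list -r tank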

You asked about setting ashift=12 manually. With these particular drives, I’m fairly sure it won’t matter. Knock on wood, but so far I haven’t encountered any rust drives that lie about their hardware sector size, only SSDs.

With that said, it’s a good idea to never take ashift for granted. Do as much research as you can on the drives you want to use, and set ashift manually and appropriately. When in doubt, use fio to test the 64K sequential write performance of the drive in question at each potential value of ashift, and go with the one that produced the lowest and most predictable, reliable latency for the drive in question.
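
If you do go the fio route, here's the shape of the test I mean; "testpool" and the wwn-0xDISKID target are placeholders for a throwaway single-disk pool you'd recreate at each candidate ashift, then compare the completion-latency numbers fio reports between runs:

root@yourbox:~# zpool create -o ashift=12 testpool wwn-0xDISKID
root@yourbox:~# fio --name=ashift-test --directory=/testpool --size=4G --rw=write --bs=64k --ioengine=posixaio --iodepth=1 --numjobs=1 --end_fsync=1 --group_reporting
root@yourbox:~# zpool destroy testpool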