Benefits to using a Dell PERC H310 over the onboard C236 SATA?

Hi,

I’ve got a ZFS datapool (set up in RAID 10) across 6 spinning-rust disks (WD Red/Red Plus, all CMR) and a system pool on 2 mirrored SSDs, currently running on an ASRock Rack C236M WS board, using the onboard C236 SATA controller to drive the disks. All 8 ports are in use, and there’s no more space in the case, so I won’t be expanding the pool that way :wink:

I recently acquired a Dell PERC H310 flashed to IT mode, and I was wondering whether I’d gain anything by using it to drive the datapool instead of the onboard controller?

I’m not passing anything through to a VM, so I don’t need it for that.

Thanks in advance for anyone who might know :wink:


Yes, as long as you’re certain that H310 is in IT mode. In IT mode, the H310 is effectively an LSI 9211-8i, which means it’ll support at least 1.5 GiB/s of total throughput (all ports together), whereas your mobo SATA controller very likely won’t support more than about 650 MiB/s total throughput.
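Back-of-envelope, with assumed round numbers (≈250 MB/s peak sequential per modern CMR drive; that figure is an assumption, not a measurement from this system):

```shell
#!/bin/sh
# Aggregate sequential throughput six spinning disks can ask for,
# versus the rough controller caps quoted above. All numbers are
# assumptions, not measurements.
per_drive=250                                # assumed MB/s peak per CMR drive
n_drives=6
aggregate=$((per_drive * n_drives))
echo "drives can push ~${aggregate} MB/s"    # ~1500 MB/s
echo "onboard SATA cap ~650 MB/s"
echo "H310 (IT mode)  >=1500 MB/s"
```

So under sequential load the six drives alone can saturate the onboard controller more than twice over, while the HBA keeps headroom.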


Thanks!

One last question: will the device path change (and so break my pools)? I’ve been using the by-id paths, so I would think not.

The SCSI path would change if you were using by-path, but the WWN and serial won’t. As long as you’re using one of those from /dev/disk/by-id, nothing will break.
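If you want to double-check before swapping controllers, the persistent names are easy to inspect (read-only commands; `datapool` is the pool name from this thread):

```shell
# udev creates several persistent names per disk; the ata-* (model+serial)
# and wwn-* links survive a controller change, while by-path entries
# encode the PCI/SCSI topology and will not.
ls -l /dev/disk/by-id/ | grep -E 'ata-|wwn-'

# Show the full device paths the pool is currently imported with.
zpool status -P datapool
```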


I’ve finally gotten round to putting the H310 in. It was remarkably painless; all zpools just came up without issue.

It also feels faster, but I don’t know whether that’s just placebo; I hadn’t really taken any benchmarks before the swap.

Pool topology:

        datapool                                               ONLINE       0     0     0
          mirror-0                                             ONLINE       0     0     0
            ata-WDC_WD140EFGX-68B0GN0_Y5KYTGXC-part2           ONLINE       0     0     0
            ata-WDC_WD140EFGX-68B0GN0_Y6G4J06C-part2           ONLINE       0     0     0
          mirror-2                                             ONLINE       0     0     0
            ata-WDC_WD60EFRX-68L0BN1_WD-WX61D96AX3DV-part3     ONLINE       0     0     0
            ata-WDC_WD60EFRX-68L0BN1_WD-WX11DC7JHEKP-part3     ONLINE       0     0     0
          mirror-3                                             ONLINE       0     0     0
            ata-WDC_WD120EFBX-68B0EN0_5QJ4E4ZB-part2           ONLINE       0     0     0
            ata-WDC_WD120EFBX-68B0EN0_5QJJ908B-part2           ONLINE       0     0     0
        logs
          mirror-1                                             ONLINE       0     0     0
            ata-INTEL_SSDSC2BA100G3R_BTTV335209Y0100FGN-part2  ONLINE       0     0     0
            ata-INTEL_SSDSC2BA100G3R_BTTV3343004X100FGN-part2  ONLINE       0     0     0

(the logs vdev is there to offload some sync NFS writes)
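For anyone copying this setup: a mirrored log vdev like that one can be added to an existing pool along these lines (the device names below are placeholders, not the real serials; only synchronous writes, such as NFS exports with sync semantics, will use it):

```shell
# Add a mirrored SLOG to an existing pool (placeholder device names).
zpool add datapool log mirror \
  /dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL_A-part2 \
  /dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL_B-part2

# Watch per-vdev I/O to confirm sync traffic actually lands on the log.
zpool iostat -v datapool 5
```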

I doubt it’s just placebo; six decent-spec rust drives across mirror vdevs can already push more than enough throughput to blow past the cap on a generic mobo SATA controller for some workloads.
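If you want numbers instead of feel, a quick sequential-read pass with fio would settle it (the directory and size here are made-up choices, not from the thread; pick a size well beyond your ARC, otherwise you mostly benchmark RAM):

```shell
# Sequential 1 MiB reads against a scratch directory on the pool.
fio --name=seqread --directory=/datapool/bench \
    --rw=read --bs=1M --size=32G --numjobs=1 \
    --ioengine=psync --group_reporting
```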
