Optimal network transport for zfs-send/zfs-receive?

Can anyone point me to guidelines or best practices for configuring ZFS send/receive to maximize speed over dedicated fiber between servers? The 10 Gbps NICs run near rated speed with a 9k MTU (according to iperf3), but I suspect SSH is not the quickest pipe, at least in its default configuration.

  • Would something like socat be better?
  • Or, an ssh that supports the “none” cipher?
  • Enable/Disable compression in ssh and/or zfs-send?
  • Worth messing around with kernel or NIC-driver tuning options?
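Not an answer, but for concreteness, here is roughly what the transport options above look like in practice; hostnames, the port number, and pool/dataset names are placeholders, and flags should be checked against your installed versions. mbuffer is a third common choice worth adding to the list, since it both moves the stream over plain TCP and smooths bursty disk I/O with a RAM buffer:

```shell
# Baseline: ssh (encrypted; single-threaded, often CPU-bound at 10 Gbps).
# aes128-gcm@openssh.com is usually among the cheapest ciphers on AES-NI CPUs.
zfs send -c tank/data@snap | ssh -c aes128-gcm@openssh.com recvhost \
    'zfs receive -u tank/data'

# Unencrypted TCP with socat (reasonable on a dedicated point-to-point link).
# On the receiver:
socat -u TCP-LISTEN:9090,reuseaddr STDOUT | zfs receive -u tank/data
# On the sender:
zfs send -c tank/data@snap | socat -u STDIN TCP:recvhost:9090

# mbuffer: RAM buffer on both ends plus its own TCP transport.
# On the receiver:
mbuffer -I 9090 -s 128k -m 1G | zfs receive -u tank/data
# On the sender:
zfs send -c tank/data@snap | mbuffer -s 128k -m 1G -O recvhost:9090
```

On the compression question: `zfs send -c` ships blocks as already compressed on disk, so stacking ssh-level compression (`-C`) on top of it over a fast link tends to just burn CPU.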

Before I run a bunch of experiments I thought I’d ask – doubtless others have figured this out. :thinking:

Other details: all systems are currently on FreeBSD 15-RELEASE. NICs are an assortment of Intel and Broadcom. Plenty of RAM, but CPUs of varying power.

CPU tends to be the bottleneck when you want to saturate 10 Gbps or more. An awful lot of network tools, and SSH is no exception, run single-threaded, meaning you're limited to the single-thread performance of your CPU when trying to push one stream down that 10 Gbps pipe.

What you really need to do is figure out how to parallelize the work so that more than one CPU thread at a time is involved in the send.
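For example (pool, dataset, and host names here are hypothetical), if the replication covers several datasets, you can run one send per dataset concurrently so that each stream gets its own ssh and cipher thread:

```shell
# Hypothetical layout: three datasets, each with its own pipeline,
# so the cipher/checksum work spreads across several CPU threads.
for ds in vm1 vm2 vm3; do
  zfs send -c "tank/$ds@snap" | ssh recvhost "zfs receive -u backup/$ds" &
done
wait   # block until all background sends have finished
```

Note this only parallelizes across datasets; a single large dataset still moves as one single-threaded stream, which is where swapping the transport (cheap cipher, socat, mbuffer) matters most.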