I use syncoid to transfer data between two servers on a gigabit LAN. Transfer speed exceeds 100 MiB/s with larger datasets, but with very small datasets (df says <20K) it drops to roughly 7 KiB/s. Is there any way to speed up the transfer for these small datasets?
What you’re seeing isn’t actually “data transfer speed”; it’s the speed of your pool at handling small metadata operations. I’m guessing your pool is rust (spinning disks).
I can’t say whether consolidating those datasets is what you “should” do, but it’s the only way you’re going to see replication speed up, short of moving to SSDs.
Replicating a ton of very small datasets instead of one large one is slow for the same reason copying 1 GiB of individual 4 KiB files goes (much, much) slower than copying a single 1 GiB file would.
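The arithmetic behind this is easy to sketch. In the toy model below, every dataset replication pays a fixed per-dataset cost (snapshot enumeration, metadata reads and writes on the receiving pool) plus payload time at wire speed. The 3-second overhead and 100 MiB/s throughput are assumed numbers for illustration, not measurements, but they reproduce the figures in the question:

```python
# Toy model: per-dataset fixed overhead + payload time at wire speed.
# Both constants are assumptions chosen to illustrate the effect.
FIXED_OVERHEAD_S = 3.0      # assumed per-dataset metadata cost on rust
WIRE_SPEED = 100 * 2**20    # ~100 MiB/s, matching the large-dataset case

def effective_speed(size_bytes):
    """Apparent transfer rate in bytes/s once fixed overhead is included."""
    total_time = FIXED_OVERHEAD_S + size_bytes / WIRE_SPEED
    return size_bytes / total_time

small = effective_speed(20 * 1024)    # a ~20 KiB dataset
large = effective_speed(10 * 2**30)   # a 10 GiB dataset
print(f"small: {small / 1024:.1f} KiB/s")   # overhead dominates completely
print(f"large: {large / 2**20:.1f} MiB/s")  # overhead vanishes in the noise
```

With these assumed numbers, the 20 KiB dataset comes out around 7 KiB/s and the 10 GiB dataset stays near wire speed, which is exactly the pattern you’re seeing: the per-dataset cost is fixed, so it only shows up when the payload is tiny.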