What are use cases for syncoid `--insecure-direct-connection`?

Anyone here using syncoid’s --insecure-direct-connection option? From what I can tell it is a somewhat recent feature from version 2.2.

I personally have no real use case for it, so I'm trying to get a feel for where people are using it and what alternative I could add in my syncoid port, chithi. It's pretty much the only feature I didn't directly port over from syncoid.

Some people are concerned about a potential throughput limitation from the SSH tunnel; this isn't necessarily about the encryption itself so much as the raw overhead. Direct connection is exactly what it sounds like: a very direct UDP connection from A to B with no authentication or encryption.

I absolutely, positively, one thousand percent do not recommend using it. But theoretically, if you had no security concerns whatsoever and wanted the absolute maximum network throughput (which typically isn't necessary without a 10Gbps or faster path from A to B), you'd use that direct connection.

I think zrepl offers mTLS connections for exactly this reason. I was kind of thinking of doing something with mTLS for chithi as well, which gets us both security and throughput. But even the author of zrepl said, in one of his presentations, that SSH was good enough for all of these workloads.

Also, pretty sure it is a TCP connection, not UDP, and that's probably what you meant to say.

I didn't implement the feature, and I opposed its inclusion for a long time before finally relenting, so long as --insecure was part of the argument name. I'm speaking from dim memory of the feature being based on nc (which can do either TCP or UDP), but I haven't really dug much deeper into it than that.
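For reference, the nc-based pattern that a direct-connection mode automates looks roughly like this. This is a sketch of the general technique, not syncoid's exact invocation; host names, port, and dataset names are all hypothetical:

```shell
# On the receiving box: listen on a TCP port and pipe straight into zfs receive.
# Anyone who can reach this port can write to the dataset -- no auth, no crypto.
nc -l 9000 | zfs receive -F tank/backup/data

# On the sending box: stream the incremental snapshot to that port.
zfs send -I tank/data@old tank/data@new | nc receiver.example 9000
```

Note that nc flag syntax varies between implementations (traditional, GNU, OpenBSD, nmap's ncat); some require `-p` for the listen port. Most also exit when the stream ends, so the listener has to be restarted for every transfer.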

Totally understand why you opposed it, from both the "do we really need this" and the security standpoints.

But I also understand that people want their replications to go brrr…


I appreciate that the flag clearly states it brings no security.

A direct NIC-to-NIC connection (or something effectively the same) or routing plaintext replication over a WireGuard interface are use cases where you might find this useful.

I've experimented with streaming replication over WireGuard via netcat, but I find SSH (over WireGuard or not) with syncoid's defaults to be more useful and much less finicky. With netcat you have to set up and tear down on both sides.
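One way to keep the plaintext stream off the physical network in this setup is to bind the listener to the WireGuard interface's address only, so the unencrypted bytes are reachable solely through the tunnel. Addresses, port, and dataset names here are hypothetical, and this is exactly the two-sided ceremony mentioned above: the listener must be started before each send and dies when the stream closes.

```shell
# Receiver: listen only on the local WireGuard address (e.g. 10.8.0.2),
# not 0.0.0.0, so the plaintext stream never leaves the tunnel.
nc -l 10.8.0.2 9000 | zfs receive -F tank/backup/data

# Sender: connect via the peer's WireGuard address.
zfs send tank/data@snap | nc 10.8.0.2 9000
```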


What would really help here (far more than stripping out authentication and encryption overhead) is figuring out some kind of tunnel that can spread its work across multiple CPU threads.

Saturating 1Gbps on the compute side (where SSH can bottleneck, if the storage is fast enough) is beyond trivial; any tinkertoy CPU can generally manage that these days.

When you hit 10Gbps and up, you start bottlenecking on single-threaded performance whether you're using SSH or not. Samba and NFS (and SMB and NFS under Windows) bottleneck the same way: each transfer runs on a single, non-forked PID, so while you can saturate a 10Gbps LAN with multiple concurrent network transfers, it's difficult to impossible to manage with a single, single-threaded process.

This could also, in theory, minimize some issues with long TCP pipes and small TCP windows. That's an entirely different issue, and it can cause SSH to bottleneck way below 1Gbps on long-distance routes with lots of hops and poor or inconsistent latency. (Switching to the HPN-SSH fork can also help with this, though not with the single-thread saturation issue.)
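The window ceiling on long pipes follows from the bandwidth-delay product: a single TCP stream can never move more than window / RTT, no matter how fast the link is. A quick back-of-the-envelope sketch (window size and RTTs are illustrative, not measured):

```python
def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound for one TCP stream: window size divided by round-trip time."""
    rtt_s = rtt_ms / 1000.0
    return (window_bytes * 8) / rtt_s / 1e6  # bits per second -> Mbps

# A 128 KiB window over a 100 ms long-haul path:
print(max_throughput_mbps(128 * 1024, 100))  # ~10.5 Mbps -- nowhere near line rate

# The same window on a 1 ms LAN path:
print(max_throughput_mbps(128 * 1024, 1))    # ~1049 Mbps
```

That's why the same SSH session that saturates a LAN can crawl across a continent: the round trip grows a hundredfold while the window stays put.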