I recently stumbled on an interesting TCP performance issue while running some performance tests that compared network performance against loopback performance. In my case the network performance exceeded the loopback performance (1 Gig network, same subnet). In the case I am dealing with, latency is crucial, so TCP_NODELAY is enabled. The best theory we have come up with is that TCP congestion control is holding up packets. We did some packet analysis and we can definitely see that packets are being held, but the reason is not obvious. Now the questions...
1) In what cases, and why, would communicating over loopback be slower than over the network?
2) When sending as fast as possible, why does toggling TCP_NODELAY have so much more of an impact on maximum throughput over loopback than over the network?
3) How can we detect and analyze TCP congestion control as a potential explanation for the poor performance?
4) Does anyone have any other theories as to the reason for this phenomenon? If yes, any method to prove the theory?
Here is some sample data generated by a simple point-to-point C++ app:
Transport     Message Size  TCP NoDelay  Send Buffer  Sender  Receiver  Throughput   Message Rate
              (bytes)                    (bytes)      Host    Host      (bytes/sec)  (msgs/sec)
TCP           128           On           16777216     HostA   HostB     118085994    922546
TCP           128           Off          16777216     HostA   HostB     118072006    922437
TCP           128           On           4096         HostA   HostB     11097417     86698
TCP           128           Off          4096         HostA   HostB     62441935     487827
TCP           128           On           16777216     HostA   HostA     20606417     160987
TCP           128           Off          16777216     HostA   HostA     239580949    1871726
TCP           128           On           4096         HostA   HostA     18053364     141041
TCP           128           Off          4096         HostA   HostA     214148304    1673033
UnixStream    128           -            16777216     HostA   HostA     89215454     696995
UnixDatagram  128           -            16777216     HostA   HostA     41275468     322464
NamedPipe     128           -            -            HostA   HostA     73488749     574130
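For reference, here is a minimal sketch (not the actual test app) of how the two knobs varied in the table, TCP_NODELAY and the send buffer size, would typically be applied to a connected TCP socket on Linux; the helper name configure_socket is purely illustrative:

```cpp
#include <cstdio>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Hypothetical helper: applies the two options varied in the table above
// to an already-created TCP socket.
void configure_socket(int fd, bool no_delay, int send_buf_bytes)
{
    int flag = no_delay ? 1 : 0;
    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) != 0)
        std::perror("setsockopt(TCP_NODELAY)");

    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF,
                   &send_buf_bytes, sizeof(send_buf_bytes)) != 0)
        std::perror("setsockopt(SO_SNDBUF)");

    // Note: Linux doubles the SO_SNDBUF value internally and caps it at
    // net.core.wmem_max, so a later getsockopt() may not report the
    // value that was requested.
}
```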
Here are a few more pieces of useful information:
Thank You
1) In what cases, and why, would communicating over loopback be slower than over the network?
Loopback puts the packet setup and TCP checksum calculation for both tx and rx on the same machine, so it needs to do twice as much processing, whereas with two machines the tx/rx work is split between them. This can have a negative impact on loopback.
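One way to probe this theory on Linux (a sketch only, assuming the sender and receiver are separate processes; the helper name pin_to_core is hypothetical) is to pin each side of the loopback test to its own core, so the tx and rx processing at least stop competing for the same CPU:

```cpp
#include <cstdio>
#include <sched.h>

// Linux-specific sketch: pin the calling process to a single core,
// e.g. sender to core 0 and receiver to core 1, so the loopback
// tx and rx work is spread across CPUs.
bool pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    if (sched_setaffinity(0 /* calling process */, sizeof(set), &set) != 0) {
        std::perror("sched_setaffinity");
        return false;
    }
    return true;
}
```

If throughput changes noticeably when the two processes are forced onto separate cores (or onto the same core), that is evidence the bottleneck is CPU processing rather than the transport itself.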
2) When sending as fast as possible, why does toggling TCP_NODELAY have so much more of an impact on maximum throughput over loopback than over the network?
I'm not sure how you've come to this conclusion, but loopback and the network are implemented very differently, and if you push them to the limit you will hit different issues. Loopback interfaces (as mentioned in the answer to 1) put the tx and rx processing overhead on the same machine. NICs, on the other hand, have a number of limits on how many outstanding packets they can hold in their ring (circular) buffers, which causes completely different bottlenecks (and this varies greatly from chip to chip, and also depends on the switch between the hosts).
3) How can we detect and analyze TCP congestion control as a potential explanation for the poor performance?
Congestion control normally only kicks in when there is packet loss. Are you seeing packet loss (retransmissions)? If not, you are probably hitting limits related to the TCP window size versus the network latency, i.e. the bandwidth-delay product.
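One low-effort way to check this from inside the application on Linux is to poll TCP_INFO on the sending socket and watch the retransmission counters and the congestion window; the helper below is just a sketch (the name dump_tcp_state is made up), not part of your app:

```cpp
#include <cstdio>
#include <netinet/in.h>
#include <netinet/tcp.h>   // struct tcp_info, TCP_INFO (Linux-specific)
#include <sys/socket.h>

// Sketch: ask the kernel for its per-connection TCP state. Growing
// retransmit counters are the tell-tale sign that loss is occurring
// and congestion control is reacting to it.
void dump_tcp_state(int fd)
{
    struct tcp_info info;
    socklen_t len = sizeof(info);
    if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len) != 0) {
        std::perror("getsockopt(TCP_INFO)");
        return;
    }
    std::printf("cwnd=%u segs  ssthresh=%u  rtt=%u us  retrans=%u  total_retrans=%u\n",
                info.tcpi_snd_cwnd, info.tcpi_snd_ssthresh,
                info.tcpi_rtt, info.tcpi_retrans, info.tcpi_total_retrans);
}
```

The same counters are visible from the command line with `ss -ti` on a per-socket basis, and `netstat -s` shows system-wide retransmission totals.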
4) Does anyone have any other theories as to the reason for this phenomenon? If yes, any method to prove the theory?
I don't understand the phenomenon you are referring to here. All I can see in your table is that some sockets have a large send buffer, which can be perfectly legitimate. On a fast machine, your application will certainly be capable of generating data faster than the network can pump it out, so I'm not sure what you are classifying as the problem here.
One final note: small messages create a much bigger performance hit on your network for various reasons, such as the fixed per-packet protocol overhead (Ethernet, IP and TCP headers) and the per-packet processing cost on both ends, which make message rate rather than byte rate the limiting factor.
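To put a rough number on that (a back-of-the-envelope estimate assuming plain Ethernet framing and TCP without options): a 128-byte payload is carried with about 20 bytes of TCP header, 20 bytes of IP header and 38 bytes of Ethernet overhead (preamble, header, FCS and inter-frame gap), so each message occupies roughly 206 bytes of wire time and only about 62% of that is payload. With TCP timestamps or other options enabled, the efficiency drops further.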