Why is it that TCP connections to a loopback interface end up in TIME_WAIT (socket closed with SO_DONTLINGER set), but identical connections to a different host do not end up in TIME_WAIT (they are reset/destroyed immediately)?
Here are scenarios to illustrate:
(A) Two applications, a client and a server, are both running on the same Windows machine. The client connects to the server via the loopback interface (127.0.0.1, port xxxx), sends data, receives data, and closes the socket (SO_DONTLINGER is set).
Let's say the connections are very short-lived, so the client app is establishing and destroying a large number of connections every second. The end result is that the sockets end up in TIME_WAIT, and the client eventually exhausts its maximum number of sockets (on Windows this is ~3900 by default, and we are assuming that this value will not be changed in the registry).
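For concreteness, here is a minimal Winsock sketch of the scenario (A) client loop. The port number (5000), the "ping" payload, and the iteration count are placeholders of my own, and a server is assumed to already be listening on 127.0.0.1; pointing addr at another host's IP instead turns it into scenario (B).

    /* Sketch of scenario (A): many short-lived loopback connections.
     * Port 5000, the payload, and the loop count are arbitrary examples.
     * Link with ws2_32.lib. */
    #include <winsock2.h>
    #include <stdio.h>
    #pragma comment(lib, "ws2_32.lib")

    int main(void)
    {
        WSADATA wsa;
        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
            return 1;

        struct sockaddr_in addr = {0};
        addr.sin_family      = AF_INET;
        addr.sin_port        = htons(5000);            /* example port only */
        addr.sin_addr.s_addr = inet_addr("127.0.0.1"); /* loopback; use another IP for scenario (B) */

        for (int i = 0; i < 10000; i++)   /* many short-lived connections */
        {
            SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
            if (s == INVALID_SOCKET)
                break;

            /* SO_DONTLINGER: closesocket() returns immediately but still
             * performs a graceful close (FIN), sending any queued data in
             * the background. It is not a hard reset. */
            BOOL dontLinger = TRUE;
            setsockopt(s, SOL_SOCKET, SO_DONTLINGER,
                       (const char *)&dontLinger, sizeof(dontLinger));

            if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            {
                /* Once local ephemeral ports are tied up in TIME_WAIT,
                 * connect() starts failing. */
                printf("connect() failed: %d\n", WSAGetLastError());
                closesocket(s);
                break;
            }

            char buf[64];
            send(s, "ping", 4, 0);
            recv(s, buf, sizeof(buf), 0);

            /* The side that closes first is the one that ends up in TIME_WAIT. */
            closesocket(s);
        }

        WSACleanup();
        return 0;
    }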
(B) Same two applications as in scenario (A), but the server is on a different host (the client is still running on Windows). The connections are identical in every way, except that they are not destined for 127.0.0.1 but for some other IP instead. Here the connections on the client machine do NOT go into TIME_WAIT, and the client app can continue to make connections indefinitely.
Why the discrepancy?