views: 3076

answers: 4
We're trying to tune an application that accepts messages via TCP and also uses TCP for some of its internal messaging. While load testing, we noticed that response time degrades significantly (and then stops altogether) as more simultaneous requests are made to the system. During this time, we see a lot of TCP connections in TIME_WAIT status, and someone suggested lowering the TIME_WAIT environment variable from its default 60 seconds to 30.

From what I understand, the TIME_WAIT setting essentially controls how long after a connection is closed its TCP resources are held before being made available to the system again.

I'm not a "network guy" and know very little about these things. I need a lot of what's in that linked post, but "dumbed down" a little.

  • I think I understand why the TIME_WAIT value can't be set to 0, but can it safely be set to 5? What about 10? What determines a "safe" setting for this value?
  • Why is the default for this value 60? I'm guessing that people a lot smarter than me had good reason for selecting this as a reasonable default.
  • What else should I know about the potential risks and benefits of overriding this value?

Thanks in advance!

+10  A: 

A TCP connection is specified by the tuple (source IP, source port, destination IP, destination port).

The reason why there is a TIME_WAIT state following session shutdown is because there may still be live packets out in the network on their way to you. If you were to re-create that same tuple and one of those packets showed up, it would be treated as a valid packet for your connection (and would probably cause an error due to sequencing).

So the TIME_WAIT time is generally set to double the packet's maximum age, i.e. the maximum age your packets will be allowed to reach before the network discards them.

That guarantees that, before you're allowed to create a connection with the same tuple, all the packets belonging to previous incarnations of that tuple will be dead.

That generally dictates the minimum value you should use. The maximum packet age is dictated by network properties; for example, satellite lifetimes are higher than LAN lifetimes since the packets have much further to go.

paxdiablo
How can I determine the "maximum packet age"? Is this set by the OS, something on the network, or some software setting? BTW, the code "generating" most of these connections is a third-party platform we don't have source for. Thanks for the great response!
Vinnie
The real name for it is maximum segment lifetime, MSL. Not sure you can change this in Windows or even if you should - it's meant to be set based on network characteristics. Windows sets it to 120s, I think. All TCP params are in HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters.
paxdiablo
+3  A: 

Pax is correct about the reasons for TIME_WAIT, and why you should be careful about lowering the default setting.

A better solution is to vary the port numbers used for the originating end of your sockets. Once you do this, you won't really care about time wait for individual sockets.

For listening sockets, you can use SO_REUSEADDR to allow the listening socket to bind despite the TIME_WAIT sockets sitting around.

Darron
I'll upvote any answer that begins with the phrase "Pax is correct". :-)
paxdiablo
+5  A: 

Usually, only the endpoint that issues an 'active close' should go into TIME_WAIT state. So, if possible, have your clients issue the active close which will leave the TIME_WAIT on the client and NOT on the server.

See here: http://www.isi.edu/touch/pubs/infocomm99/infocomm99-web/ for details (it also explains why it's not always possible due to protocol design that doesn't take TIME_WAIT into consideration).

Len Holgate
Good point, server will still have to wait for the ACK from its FIN but that should take less time. It's also good practice for the initiator to shut down the session since only it generally knows when it's finished.
paxdiablo
A: 

TIME_WAIT might not be the culprit.

int listen(int sockfd, int backlog);

According to Unix Network Programming Volume 1, the backlog is defined as the sum of the completed connection queue and the incomplete connection queue.

Let's say the backlog is 5. If you have 3 completed connections (ESTABLISHED state) and 2 incomplete connections (SYN_RCVD state), and another connection request arrives with a SYN, the TCP stack simply ignores the SYN packet, knowing it will be retransmitted some other time. This might be causing the degradation.

At least that's what I've been reading. ;)

yogman
I'm pretty sure the backlog is only for connections that haven't yet reached ESTABLISHED; once they have, they're removed from the queue; they're only blocking more incoming connections until the (SYN,SYN/ACK,ACK) handshaking is complete, basically once the server returns from accept().
paxdiablo
(-1) No, the listen backlog is purely for connections that are not completely established; i.e. they have arrived at the TCP/IP stack but not yet been 'accepted'. If your listen backlog is too small then your server will refuse connections if connections come in more quickly than it can accept them.
Len Holgate
A minor misunderstanding. "completed connection queue. This queue contains an entry for each connection for which the three way handshake is completed. The socket is in the ESTABLISHED state. Each call to accept() removes the front entry of the queue." http://www.sean.de/Solaris/soltune.html
yogman