I am programming a server, and my number of connections seems to be limited: my bandwidth isn't being saturated even when I've set the number of connections to "unlimited".

How can I increase or eliminate the maximum number of connections that my Ubuntu Linux box can open at a time? Does the OS limit this, or is it the router or the ISP? Or is it something else?

Please let me know.

Thanks in advance, jbu

A: 

duplicate of http://stackoverflow.com/questions/410481/increasing-the-maximum-number-of-tcp-ip-connections-in-linux-closed

warren
Yes, but this one is more programming-related so it won't be closed, and the other question wasn't answered.
+7  A: 

There are a couple of variables that set the maximum number of connections. Most likely, you're running out of file descriptors first. Check ulimit -n. After that, there are settings in /proc, but those default to the tens of thousands.
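A quick sketch of checking and raising those limits (the /proc path is the standard system-wide cap; 65536 is just an illustrative target):

ulimit -n                    # per-process open-file limit in this shell
cat /proc/sys/fs/file-max    # system-wide limit on open file descriptors
ulimit -n 65536              # raise the per-process limit (may need root or limits.conf)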

More importantly, it sounds like you're doing something wrong. A single TCP connection ought to be able to use all of the bandwidth between two parties; if it isn't:

  • Check whether your TCP window is large enough. The Linux defaults are good for everything except really fast Internet links (hundreds of Mbps) or fast satellite links. What is your bandwidth*delay product? (See the sketch after this list.)
  • Check for packet loss using ping with large packets (ping -s 1472 ...).
  • Check for rate limiting. On Linux, this is configured with tc.
  • Confirm that the bandwidth you think exists actually exists, using, e.g., iperf.
  • Confirm that your protocol is sane. Remember latency.
  • If this is a gigabit+ LAN, can you use jumbo frames? Are you?
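As a rough sketch of the window check in the first bullet (the 100 Mbps / 80 ms figures are made-up examples, and server.example.com is a placeholder for your far end):

# bandwidth*delay product for a 100 Mbps link with an 80 ms round trip:
echo $(( 100 * 1000000 / 8 * 80 / 1000 ))    # = 1000000 bytes that must fit in the TCP window
# the kernel's receive-window autotuning limits (min / default / max, in bytes)
sysctl net.ipv4.tcp_rmem
# measure the bandwidth that actually exists (run "iperf -s" on the far end first)
iperf -c server.example.com -t 30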

Possibly I have misunderstood. Maybe you're doing something like BitTorrent, where you need lots of connections. If so, figure out how many connections you're actually using (try netstat or lsof; see the sketch after this list). If that number is substantial, you might:

  • Have a lot of bandwidth, e.g., 100 Mbps+. In this case, you may actually need to raise ulimit -n. Still, ~1000 connections (the default on my system) is quite a few.
  • Have network problems which are slowing down your connections (e.g., packet loss)
  • Have something else slowing you down, e.g., I/O bandwidth, especially if you're seeking. Have you checked iostat -x?
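A quick sketch of the counting step ("myserver" is a placeholder for your binary's name):

netstat -tn | grep -c ESTABLISHED    # established TCP connections, system-wide
lsof -i -n -P | grep -c myserver     # open sockets held by your server process
iostat -x 5                          # per-device I/O stats, to rule out a disk bottleneck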

Also, if you are using a consumer-grade NAT router (Linksys, Netgear, D-Link, etc.), beware that its connection table may not cope with thousands of simultaneous connections.

I hope this provides some help. You're really asking a networking question.

derobert
Thank you for answering, I really appreciate it.
A: 

I have a TCP connection issue on Ubuntu. Mobile devices connect to my Ubuntu server, and the server program handles each connection (closing it after receiving string data, or when it goes idle). Sometimes a 3G device drops off for a short time, and during that period the connection remains ESTABLISHED (as shown by netstat --tcp --numeric --all) until the server is rebooted. In such cases I might see many established connections for one client when only one is real. Any idea how to force Ubuntu to check or reset unused established connections?
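(For reference, the kernel knobs that control how quickly Linux notices a dead peer are the TCP keepalive sysctls; they only apply to sockets on which the application has enabled SO_KEEPALIVE, and the defaults shown are typical, not recommendations:)

sysctl net.ipv4.tcp_keepalive_time      # idle seconds before the first probe (default 7200)
sysctl net.ipv4.tcp_keepalive_intvl     # seconds between probes (default 75)
sysctl net.ipv4.tcp_keepalive_probes    # failed probes before the kernel drops the connection (default 9)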

Thanks

Eby
A: 

The maximum number of connections is affected by certain limits on both the client and server sides, albeit a little differently.

On the client side: increase the ephemeral port range and decrease the fin_timeout. To find the default values:

sysctl net.ipv4.ip_local_port_range
sysctl net.ipv4.tcp_fin_timeout

The ephemeral port range defines the maximum number of outbound sockets a host can create from a particular IP address. The fin_timeout defines the minimum time these sockets will stay in the TIME_WAIT state (unusable after being used once). The usual system defaults are:

  • net.ipv4.ip_local_port_range = 32768 61000
  • net.ipv4.tcp_fin_timeout = 60

This basically means your system cannot sustain more than (61000 - 32768) / 60 = 470 new outbound connections per second: there are 28232 usable ports, and each is tied up in TIME_WAIT for 60 seconds after use. If you are not happy with that, begin by increasing the port range; setting it to 15000 61000 is pretty common these days. You can further increase availability by decreasing the fin_timeout. If you do both, you should readily see over 1500 outbound connections per second.
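A sketch of applying both changes (the fin_timeout of 30 is just an example value; add the same lines to /etc/sysctl.conf to make them survive a reboot):

sysctl -w net.ipv4.ip_local_port_range="15000 61000"    # widen the ephemeral port range
sysctl -w net.ipv4.tcp_fin_timeout=30                   # shorten TIME_WAIT occupancy
# (61000 - 15000) / 30 ≈ 1533 new outbound connections per second, sustained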

On the server side: the net.core.somaxconn value has an important role. It limits the maximum number of requests queued to a listen socket. If you are sure of your server application's capability, bump it up from its default of 128 to something like 1024. You can then take advantage of this increase by setting the backlog argument of your application's listen call to an equal or higher integer.
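For example (1024 mirrors the figure above; the matching backlog change happens in your own server source):

sysctl -w net.core.somaxconn=1024    # allow up to 1024 pending connections per listen socket
# then pass a matching backlog in the server code: listen(sockfd, 1024);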

The txqueuelen parameter of your Ethernet interfaces also has a role to play. The default value is 1000, so bump it up to 5000 or even more if your system can handle it.

Similarly, bump up the values of net.core.netdev_max_backlog and net.ipv4.tcp_max_syn_backlog. Their default values are 1000 and 1024 respectively. (A sketch of all three follows.)
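A sketch of all three (eth0 is an assumed interface name, and 5000/4096 are illustrative values, not tested recommendations):

ip link set dev eth0 txqueuelen 5000           # lengthen the device transmit queue
sysctl -w net.core.netdev_max_backlog=5000     # input-side packet queue before the kernel drops
sysctl -w net.ipv4.tcp_max_syn_backlog=4096    # queue of half-open connections awaiting the final ACK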

Finally, remember to start both your client- and server-side applications with increased FD ulimits, set in the shell.
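For example, in the shell that launches each process (65536 is arbitrary, ./my_server is a hypothetical binary, and the hard limit may first need raising in /etc/security/limits.conf):

ulimit -n 65536    # raise the FD limit for this shell and its children
./my_server        # the server inherits the raised limit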

mdk