Unfortunately you haven't provided many details about your server's nature. I'll assume you are writing a typical TCP server. In this answer I won't go into any Java-specific details.
The short advice is: insert a delay between client connections. Without it you are effectively simulating a DoS attack on your own server.
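For example, if your test client opens connections in a tight loop, pacing it roughly like the following sketch already makes a big difference (this is only an illustration; the port number and the 100 ms pause are arbitrary assumptions):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    for (int i = 0; i < 100; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd == -1) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(12345);            /* assumed test port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1)
            perror("connect");

        close(fd);
        usleep(100 * 1000);   /* pause ~100 ms between connection attempts */
    }
    return 0;
}
```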
For the longer answer, read on.
Usually a TCP server creates only one listening socket by calling (in the lovely C interface) int sockfd = socket(...) and passing the result (sockfd in our case) to the bind() and listen() functions. After these preparations, the server calls accept(), which puts it to sleep (if the socket was marked as blocking); when a client on the other side of the Earth calls connect(), accept() on the server side, with the help of the OS kernel, returns a connected socket. The number of connections that can be left pending is controlled through the listen() function.
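To make that flow concrete, here is a minimal sketch of such a blocking server (error handling mostly omitted; the port number and the backlog of 128 are arbitrary assumptions):

```c
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sockfd = socket(AF_INET, SOCK_STREAM, 0);   /* one listening socket */

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(12345);            /* assumed port */

    bind(sockfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(sockfd, 128);          /* 128 = backlog, discussed below */

    for (;;) {
        /* accept() blocks until a client's connect() completes */
        int connfd = accept(sockfd, NULL, NULL);
        if (connfd == -1) { perror("accept"); continue; }
        /* ... talk to the client on connfd ... */
        close(connfd);
    }
}
```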
listen() has a backlog parameter which defines the maximum number of connections the OS kernel should queue for the socket (this is basically the sum of all connections in the SYN_RCVD and ESTABLISHED states). Historically, the recommended backlog value in the 1980s was something like 5, which is obviously far too small nowadays. On FreeBSD 7.2, for example, the hard limit for the backlog can be checked by typing:
% sysctl kern.ipc.somaxconn
kern.ipc.somaxconn: 128
and in Fedora 10:
% cat /proc/sys/net/core/somaxconn
128
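If the backlog you pass to listen() is larger than this limit, the kernel typically caps it silently to somaxconn, so if you really need a longer queue you have to raise the sysctl itself (for example with sysctl -w net.core.somaxconn=1024 on Linux, or sysctl kern.ipc.somaxconn=1024 on FreeBSD). But again, pacing the test client is usually the better fix than enlarging the queue.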