As I understand it, TcpListener will queue connections once you call Start(). Each time you call AcceptTcpClient (or BeginAcceptTcpClient), it will dequeue one item from the queue.

If we load test our TcpListener app by sending 1,000 connections to it at once, the queue builds far faster than we can drain it, and clients eventually time out because their connections are still sitting in the queue when the timeout expires. Yet the server doesn't appear to be under much pressure: our app isn't consuming much CPU time, and the other monitored resources on the machine aren't breaking a sweat. It feels like we're not running efficiently enough right now.

We're calling BeginAcceptTcpClient and then immediately handing over to a ThreadPool thread to do the actual work, then calling BeginAcceptTcpClient again. The work involved doesn't put any real pressure on the machine: it's basically just a 3-second sleep followed by a dictionary lookup and then a 100-byte write to the TcpClient's stream.
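For context, the per-connection work is roughly the following. This is a hypothetical sketch; the `responses` dictionary and its key are made up to illustrate the lookup described above:

```csharp
using System.Collections.Generic;
using System.Net.Sockets;
using System.Threading;

class Worker
{
    // Hypothetical lookup table standing in for the real dictionary.
    private static readonly Dictionary<string, byte[]> responses =
        new Dictionary<string, byte[]> { { "some-key", new byte[100] } };

    // Roughly the work described above: a 3-second sleep, a dictionary
    // lookup, then a ~100-byte write to the client's stream.
    public static void HandleTcpRequest(TcpClient client)
    {
        using (client)
        {
            Thread.Sleep(3000);                       // simulated processing delay
            byte[] response = responses["some-key"];  // hypothetical key
            NetworkStream stream = client.GetStream();
            stream.Write(response, 0, response.Length);
        }
    }
}
```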

Here's the TcpListener code we're using:

    // Thread signal.
    private static ManualResetEvent tcpClientConnected = new ManualResetEvent(false);

    public void DoBeginAcceptTcpClient(TcpListener listener)
    {
        // Set the event to nonsignaled state.
        tcpClientConnected.Reset();

        listener.BeginAcceptTcpClient(
            new AsyncCallback(DoAcceptTcpClientCallback),
            listener);

        // Wait for signal
        tcpClientConnected.WaitOne();
    }

    public void DoAcceptTcpClientCallback(IAsyncResult ar)
    {
        // Get the listener that handles the client request, and the TcpClient
        TcpListener listener = (TcpListener)ar.AsyncState;
        TcpClient client = listener.EndAcceptTcpClient(ar);

        if (inProduction)
            ThreadPool.QueueUserWorkItem(state => HandleTcpRequest(client, serverCertificate));  // With SSL
        else
            ThreadPool.QueueUserWorkItem(state => HandleTcpRequest(client));  // Without SSL

        // Signal the calling thread to continue.
        tcpClientConnected.Set();
    }

    public void Start()
    {
        currentHandledRequests = 0;
        tcpListener = new TcpListener(IPAddress.Any, 10000);
        try
        {
            tcpListener.Start();

            while (true)
                DoBeginAcceptTcpClient(tcpListener);
        }
        catch (SocketException)
        {
            // The TcpListener is shutting down, exit gracefully
            CheckBuffer();
            return;
        }
    }

I'm assuming the answer will be related to using Sockets directly instead of TcpListener, or at least using TcpListener.AcceptSocket, but how would we go about doing that?

One idea we had was to call AcceptTcpClient and immediately enqueue the TcpClient into one of several Queue&lt;TcpClient&gt; objects. Each queue would be polled by its own dedicated thread (one queue per thread), so a thread wouldn't be blocked by monitors while other Dequeue operations run. Each queue thread would then use ThreadPool.QueueUserWorkItem to have the work done on a ThreadPool thread and move on to dequeuing the next TcpClient in its queue. Would you recommend this approach, or is the problem that we're using TcpListener at all, so that no amount of rapid dequeuing will fix it?
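A minimal sketch of that multi-queue idea, for the sake of discussion (the round-robin choice and thread count are illustrative; the handler delegate stands in for the question's HandleTcpRequest):

```csharp
using System;
using System.Collections.Generic;
using System.Net.Sockets;
using System.Threading;

class ClientDispatcher
{
    private readonly Queue<TcpClient>[] queues;
    private readonly Action<TcpClient> handler;  // e.g. the question's HandleTcpRequest
    private int next;

    public ClientDispatcher(int threadCount, Action<TcpClient> handler)
    {
        this.handler = handler;
        queues = new Queue<TcpClient>[threadCount];
        for (int i = 0; i < threadCount; i++)
        {
            queues[i] = new Queue<TcpClient>();
            var queue = queues[i];  // capture this iteration's queue, not the loop variable
            new Thread(() => Drain(queue)) { IsBackground = true }.Start();
        }
    }

    // Called from the accept loop; round-robin so consecutive accepts
    // don't contend on the same queue's lock.
    public void Enqueue(TcpClient client)
    {
        var queue = queues[next++ % queues.Length];
        lock (queue)
        {
            queue.Enqueue(client);
            Monitor.Pulse(queue);  // wake that queue's draining thread
        }
    }

    private void Drain(Queue<TcpClient> queue)
    {
        while (true)
        {
            TcpClient client;
            lock (queue)
            {
                while (queue.Count == 0)
                    Monitor.Wait(queue);  // lock released while waiting, reacquired on Pulse
                client = queue.Dequeue();
            }
            ThreadPool.QueueUserWorkItem(_ => handler(client));
        }
    }
}
```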

+2  A: 

I've whipped up some code that uses sockets directly, but I lack the means of performing a load test with 1000 clients. Could you please try to test how this code compares to your current solution? I'd be very interested in the results as I'm building a server that needs to accept a lot of connections as well right now.

static WaitCallback handleTcpRequest = new WaitCallback(HandleTcpRequest);

static void Main()
{
    var e = new SocketAsyncEventArgs();
    e.Completed += new EventHandler<SocketAsyncEventArgs>(e_Completed);

    var socket = new Socket(
        AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    socket.Bind(new IPEndPoint(IPAddress.Loopback, 8181));
    socket.Listen((int)SocketOptionName.MaxConnections);
    if (!socket.AcceptAsync(e))  // false means the accept completed synchronously
        e_Completed(socket, e);

    Console.WriteLine("--ready--");
    Console.ReadLine();
    socket.Close();
}

static void e_Completed(object sender, SocketAsyncEventArgs e)
{
    var socket = (Socket)sender;
    do
    {
        ThreadPool.QueueUserWorkItem(handleTcpRequest, e.AcceptSocket);
        e.AcceptSocket = null;  // must be cleared before reusing the event args
    } while (!socket.AcceptAsync(e));  // loop while accepts keep completing synchronously
}

static void HandleTcpRequest(object state)
{
    var socket = (Socket)state;
    Thread.Sleep(100); // do work
    socket.Close();
}
dtb
Thanks! I'll give this a try now.
Matthew Brindley
+1 This is more in line with the way I do it. In short, don't depend on your main thread to start each asynchronous accept operation. Kick off the asynchronous accept from the main thread the first time and then each subsequent asynchronous accept from the callback method itself.
Matt Davis
+1  A: 

Unless I'm missing something, you're calling BeginAcceptTcpClient, which is asynchronous, but then you're calling WaitOne() to wait until the asynchronous code finishes, which effectively makes the process synchronous. Your code can only accept one client at a time. Or am I totally crazy? At the very least, this seems like a lot of context switching for nothing.

Jonathan Beerhalter
There can be only one call to BeginAcceptTcpClient in flight at a time. But you're right: the use of the ManualResetEvent throttles the acceptance rate unnecessarily.
dtb
The code above is from MSDN: http://msdn.microsoft.com/en-us/library/system.net.sockets.tcplistener.beginaccepttcpclient.aspx - so you're saying I should call Begin again on the first line of the DoAcceptTcpClientCallback method? Does End not have to be called first before another call to Begin?
Matthew Brindley
You can call listener.BeginAcceptTcpClient at the **end** of DoAcceptTcpClientCallback (after you have called listener.EndAcceptTcpClient). The ManualResetEvent is not necessary. But the Begin/EndAcceptTcpClient methods have some performance issues, which is why the AcceptAsync method was added to the Socket class.
dtb
Ah ok, I see. I'm not sure calling Begin again vs using the ManualResetEvent would improve performance much? I've changed it now to remove the ManualResetEvent and I'm calling Begin after End, but before the ThreadPool stuff, I'll see how that performs.
Matthew Brindley
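For reference, the accept-from-callback pattern discussed in these comments looks roughly like this (a sketch assuming the question's HandleTcpRequest and port; no ManualResetEvent involved):

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class Server
{
    private TcpListener tcpListener;

    public void Start()
    {
        tcpListener = new TcpListener(IPAddress.Any, 10000);
        tcpListener.Start();
        tcpListener.BeginAcceptTcpClient(AcceptCallback, tcpListener);  // arm the first accept
    }

    private void AcceptCallback(IAsyncResult ar)
    {
        var listener = (TcpListener)ar.AsyncState;
        TcpClient client;
        try
        {
            client = listener.EndAcceptTcpClient(ar);
        }
        catch (ObjectDisposedException)
        {
            return;  // listener was stopped; exit quietly
        }

        // Re-arm the accept immediately, before doing any per-client work,
        // so the listener is never left idle.
        listener.BeginAcceptTcpClient(AcceptCallback, listener);

        ThreadPool.QueueUserWorkItem(_ => HandleTcpRequest(client));
    }

    private void HandleTcpRequest(TcpClient client) { /* as in the question */ }
}
```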
A: 

The first thing to ask yourself is: "is 1,000 connections all at once reasonable?" Personally I think it's unlikely that you will get into that situation. More likely you have 1,000 connections occurring over a short period of time.

I have a TCP test program that I use to test my server framework, it can do things like X connections in total in batches of Y with a gap of Z ms between each batch; which I personally find is more real world than 'vast number all at once'. It's free, it might help, you can get it from here: http://www.lenholgate.com/archives/000568.html

As others have said, increase the listen backlog, process the connections faster, use asynchronous accepts if possible...

Len Holgate
I'm using Siege (google it) to simulate stress tests, but it's an HTTP-only thing :)
hoodoos
And is 1,000 connections all at once something you really expect to happen in production?
Len Holgate
A: 

Just a suggestion: why not accept the clients synchronously (using AcceptTcpClient instead of BeginAcceptTcpClient), and then process each client on a new thread? That way, you won't have to wait for one client to be processed before you can accept the next one.

Thomas Levesque
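That suggestion, sketched in the style of the question's code (one thread per client, assuming the question's HandleTcpRequest; workable for a 3-second workload, though thread-per-client gets heavy at very high connection counts):

```csharp
public void Start()
{
    var listener = new TcpListener(IPAddress.Any, 10000);
    listener.Start();
    while (true)
    {
        // Blocks until a connection is dequeued from the OS backlog.
        TcpClient client = listener.AcceptTcpClient();

        // Hand the client off immediately so the loop can accept the next one.
        new Thread(() => HandleTcpRequest(client)) { IsBackground = true }.Start();
    }
}
```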
A: 

It was alluded to in the other answers, but I would suggest using the overload of tcpListener.Start() that lets you set the backlog to a number higher than the maximum number of simultaneous connections you're expecting:


    public void Start()
    {
        currentHandledRequests = 0;
        tcpListener = new TcpListener(IPAddress.Any, 10000);
        try
        {
            tcpListener.Start(1100);  // This is the backlog parameter

            while (true)
                DoBeginAcceptTcpClient(tcpListener);
        }
        catch (SocketException)
        {
            // The TcpListener is shutting down, exit gracefully
            CheckBuffer();
            return;
        }
    }

Basically, this option sets how many "pending" TCP connections are allowed to wait for an Accept call. If you aren't accepting connections fast enough and this backlog fills up, further TCP connections will be automatically rejected, and you won't even get a chance to process them.

As others have mentioned, the other possibility is speeding up how fast you process the incoming connections. You still, however, should set the backlog to a higher value, even if you can speed up the accept time.

Steve Wranovsky
A: 

Go to www.protocol-builder.com, where you can build your complete protocol.

Zamir