views:

497

answers:

3

Everything that I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs which saves on the allocation).

I think this makes sense if we're talking about a server with many client connections where it's not possible to allocate one thread per connection. Then I can see the advantage of using the ThreadPool threads and getting async callbacks on them.

But in my app, I'm the client and I just need to listen to one server sending market tick data over one TCP connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() with it. My thread blocks on this call and wakes up once new data arrives.
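A minimal sketch of that setup (the buffer size and the `Process` routine are illustrative, not from my real code):

```csharp
using System;
using System.Net.Sockets;
using System.Threading;

class BlockingReceiver
{
    public static void Start(Socket socket)
    {
        var thread = new Thread(() =>
        {
            var buffer = new byte[8192];
            while (true)
            {
                // Blocks until the server sends data; returns 0 on graceful shutdown.
                int received = socket.Receive(buffer);
                if (received == 0) break;      // remote side closed the connection
                Process(buffer, received);     // hypothetical tick-processing routine
            }
        });
        thread.IsBackground = true;
        thread.Priority = ThreadPriority.Highest;  // as described above
        thread.Start();
    }

    static void Process(byte[] data, int count) { /* parse market ticks here */ }
}
```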

If I were to switch this to an async pattern so that I get a callback when there's new data, I see two issues:

  1. The threadpool threads will have default priority so it seems they will be strictly worse than my own thread which has Highest priority.

  2. I'll still have to send everything through a single thread at some point. Say that I get N callbacks at almost the same time on N different threadpool threads notifying me that there's new data. The N byte arrays that they deliver can't be processed on the threadpool threads because there's no guarantee that they represent N unique market data messages because TCP is stream based. I'll have to lock and put the bytes into an array anyway and signal some other thread that can process what's in the array. So I'm not sure what having N threadpool threads is buying me.
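To illustrate the stream-reassembly point in (2): because TCP delivers a byte stream rather than discrete messages, something has to accumulate bytes and cut them into complete messages. A sketch, assuming a hypothetical 4-byte little-endian length prefix (a real feed defines its own framing), optimized for clarity rather than speed:

```csharp
using System;
using System.Collections.Generic;

class MessageFramer
{
    readonly List<byte> _pending = new List<byte>();

    // Feed raw bytes in arrival order; returns any complete message payloads.
    public List<byte[]> Push(byte[] data, int count)
    {
        for (int i = 0; i < count; i++) _pending.Add(data[i]);

        var messages = new List<byte[]>();
        while (_pending.Count >= 4)
        {
            // Read the 4-byte length prefix (ToArray is wasteful but clear).
            int bodyLen = BitConverter.ToInt32(_pending.ToArray(), 0);
            if (_pending.Count < 4 + bodyLen) break;   // message not complete yet

            messages.Add(_pending.GetRange(4, bodyLen).ToArray());
            _pending.RemoveRange(0, 4 + bodyLen);
        }
        return messages;
    }
}
```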

Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?

UPDATE:

So I think that I was misunderstanding the async pattern in (2) above. I would get a callback on one worker thread when there was data available. Then I would begin another async receive and get another callback, etc. I wouldn't get N callbacks at the same time.
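A sketch of that sequential pattern with BeginReceive/EndReceive, one outstanding receive at a time, each callback posting the next (class and method names are illustrative):

```csharp
using System;
using System.Net.Sockets;

class AsyncReceiver
{
    readonly Socket _socket;
    readonly byte[] _buffer = new byte[8192];

    public AsyncReceiver(Socket socket) { _socket = socket; }

    public void Start()
    {
        // Post a single asynchronous receive; the callback fires when data arrives.
        _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, OnReceive, null);
    }

    void OnReceive(IAsyncResult ar)
    {
        int received = _socket.EndReceive(ar);
        if (received == 0) return;       // connection closed
        Process(_buffer, received);      // runs on a thread-pool thread
        Start();                         // post the next receive
    }

    static void Process(byte[] data, int count) { /* parse ticks */ }
}
```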

The question still is the same though. Is there any reason that the callbacks would be better in my specific situation where I'm the client and only connected to one server.

+3  A: 

The number one rule of performance is only try to improve it when you have to.

I see you mention standards but never mention problems. If you aren't having any, then you don't need to worry about what the standards say.

Guvante
True, there is no issue now. But it's hard to measure. I could set up two versions of the app, one with the async pattern and one with the sync pattern, and measure it. But it would be nice to know if someone else has already done this and can tell me that async will be faster and why.
Michael Covelli
+3  A: 

The slowest part of your application will be the network communication. It's highly likely that you will make almost no difference to performance for a one thread, one connection client by tweaking things like this. The network communication itself will dwarf all other contributions to processing or context switching time.

Say that I get N callbacks at almost the same time on N different threadpool threads notifying me that there's new data.

Why is that going to happen? If you have one socket, you Begin an operation on it to receive data, and you get exactly one callback when it's done. You then decide whether to do another operation. It sounds like you're overcomplicating it, though maybe I'm oversimplifying it with regard to what you're trying to do.

In summary, I'd say: pick the simplest programming model that gets you what you want; of the choices available in your scenario, they would be unlikely to make any noticeable difference to performance whichever one you go with. With the blocking model, you're "wasting" a thread that could be doing some real work, but hey... maybe you don't have any real work for it to do.

Daniel Earwicker
Thanks for your reply. You're right, I was looking at this example wrong. So I Begin and receive a callback on some threadpool thread when there's data. So I process it and can then decide that I want to continue listening for more and get another callback. But even if this is the case, I don't see how this could be faster than just having one high priority thread dedicated to doing the Receive().
Michael Covelli
First off, I'd avoid setting the priority of threads. I'm pretty sure it's generally not recommended. Better to let all threads that have work to do get equal access to the CPU. Secondly, my point is that it will *not* be faster. The value of async APIs is that they allow you to avoid tying up a thread (an expensive resource due to the size of a call stack) in pure waiting, and if you only have one connection and one thread, and nothing to do until you get the data back, then there is really no great cost in tying up one thread. The exception is if you use Silverlight, as it only has async.
Daniel Earwicker
"Faster" is almost completely moot in this scenario, because these kinds of choices will be lost in the noise compared to the cost of the network communication itself. There is literally nothing to be gained in performance by switching between these techniques.
Daniel Earwicker
I think you're right. I don't think switching to async will help me. It seems to me that the async pattern and the SocketAsyncEventArgs approach which avoids the IAsyncResult alloc are appropriate for a server with dozens of client connections. But I was just skeptical of my thinking because literally every site that I search talks about how great the .NET async pattern is for high performance sockets.
Michael Covelli
Why would changing the thread priority be a bad idea? Say I have one thread to check for certain compliance checks. Wouldn't it make sense to set that to a low priority while the thread consuming market data was set as high as possible? Even if the scheduler ignores this, then at least I'm telling it that if you can only run one of these two, run the high priority one first.
Michael Covelli
It *can* have benefits, but it's a form of tuning or tweaking that you'd apply to an application in special situations rather than something you'd do as a habit. For the overwhelming majority of situations, it makes no difference. There are threads ready to do work, and threads waiting for IO results. If there's work to be done in a server app, the sooner it's done, the better. So all threads ready to do work (not blocked on IO) may as well have equal priority. In a GUI, the user has priority, so Windows bumps up the priority of the foreground app.
Daniel Earwicker
+1  A: 

"This class was specifically designed for network server applications that require high performance."

As I understand, you are a client here, having only a single connection.
Data on this connection arrives in order, consumed by a single thread.

You will probably lose performance if you instead receive small amounts on separate threads, just so that you can assemble them later in a serialized, effectively single-threaded manner.
Much Ado about Nothing.


You do not really need to speed this up, and you probably cannot.

What you can do, however is to dispatch work units to other threads after you receive them. You do not need SocketAsyncEventArgs for this. This might speed things up.
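For example, a producer/consumer queue built on BlockingCollection (available since .NET 4; class and method names here are illustrative) can hand completed work units from the receive thread to a worker:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class TickDispatcher
{
    readonly BlockingCollection<byte[]> _queue = new BlockingCollection<byte[]>();

    public TickDispatcher()
    {
        var worker = new Thread(() =>
        {
            // Blocks when the queue is empty; wakes as items are added.
            foreach (var msg in _queue.GetConsumingEnumerable())
                Handle(msg);               // heavy processing off the receive thread
        });
        worker.IsBackground = true;
        worker.Start();
    }

    // Called from the receive thread once a complete message is assembled.
    public void Dispatch(byte[] message)
    {
        _queue.Add(message);
    }

    static void Handle(byte[] message) { /* ... */ }
}
```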

As always, measure & measure.
Also, just because you can, it does not mean you should.
If the performance is enough for the foreseeable future, why complicate matters?

andras
I agree. That was my thinking also. If I could get multiple callbacks on multiple ThreadPool threads, that seems like a bad idea. But I was skeptical because every article on socket perf that I've read for .NET sockets says how great the async sockets are (and I'm sure they are much better for many applications). I just wanted to make sure I wasn't missing something for my particular case.
Michael Covelli