views:

485

answers:

3

How can I achieve low latency with a WCF server in a publish-subscribe scenario? Specifically, clients subscribe to data and receive updates; the latency in question is the time between the data changing and the client receiving the change. CPU, memory, and bandwidth requirements are not important and can be high.

The basics are obvious: binary serialization, named pipes, etc. But does it make sense, for example, to send data through an always-connected stream? Or to send a batch of updates as a single message to decrease RPC/header overhead?

Maybe there are some projects with code or interfaces available to use as examples?

+1  A: 

If you use a duplex channel, you can have clients connect to the server and in doing so pass another service contract as a callback. The server then uses this callback to send updates to the client as they become available.

I've written an in-house pub-sub mechanism using this approach and latency is about as low as you would expect possible via WCF.

What is your target performance?

This MSDN article discusses using a WCF duplex channel.
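A minimal sketch of such a duplex contract (interface and member names here are illustrative, not taken from any particular project):

```csharp
using System.ServiceModel;

// The service contract names a callback contract; WCF requires a
// duplex-capable binding (e.g. NetTcpBinding or NetNamedPipeBinding).
[ServiceContract(CallbackContract = typeof(IDataCallback))]
public interface IDataFeed
{
    [OperationContract]
    void Subscribe(string topic);
}

public interface IDataCallback
{
    // IsOneWay = true keeps the server from blocking while
    // each client processes the update.
    [OperationContract(IsOneWay = true)]
    void OnUpdate(string topic, byte[] payload);
}

// Server side, inside Subscribe: capture the client's callback channel
// and keep it for pushing updates later.
// var callback = OperationContext.Current.GetCallbackChannel<IDataCallback>();
```

The one-way callback is the usual choice for low latency here, since a request-reply callback would serialize the fan-out on each client's acknowledgement.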

Drew Noakes
That's basically the definition of pub-sub in WCF... Now, there are a lot of details within the "pub-sub in WCF with callbacks" approach that affect performance and latency. Target performance is hard to quantify - I'm looking at 1 ms tops, but this number is meaningless without describing the data size distribution, update rate, and available resources. So I'll settle for an "as low as possible" solution.
ima
One issue that I haven't resolved to my satisfaction with a WCF pub-sub solution is the repeated serialisation of update objects -- once per client. When I've needed objects to be concisely represented on the wire, and can afford to have client and server tightly coupled, I've made my messages of type `byte[]` and done custom serialisation on server and client using purpose-built assemblies that were generated with Reflection.Emit at startup from a data dictionary.
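A hand-written equivalent of such generated serializers (skipping the Reflection.Emit step; the message type and field layout below are purely illustrative) could look like this -- a fixed, field-ordered binary layout with no per-message metadata:

```csharp
using System;
using System.IO;

public struct Tick
{
    public int SymbolId;
    public double Price;
    public long TimestampTicks;
}

public static class TickCodec
{
    // Writes fields in a fixed order; the reader must match exactly,
    // which is the tight coupling mentioned above.
    public static byte[] Serialize(Tick t)
    {
        using var ms = new MemoryStream(20);
        using var w = new BinaryWriter(ms);
        w.Write(t.SymbolId);        // 4 bytes
        w.Write(t.Price);           // 8 bytes
        w.Write(t.TimestampTicks);  // 8 bytes
        return ms.ToArray();        // 20 bytes total, no field names or tags
    }

    public static Tick Deserialize(byte[] data)
    {
        using var r = new BinaryReader(new MemoryStream(data));
        return new Tick
        {
            SymbolId = r.ReadInt32(),
            Price = r.ReadDouble(),
            TimestampTicks = r.ReadInt64(),
        };
    }
}
```

Serializing once into a `byte[]` and handing the same buffer to every client callback also sidesteps the once-per-client serialization cost.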
Drew Noakes
I can see the advantage, but in my case the data itself is already composed of named elements of basic types or byte arrays.
ima
+1  A: 

Not a comprehensive solution, but: to reduce the latency associated with data size and network transfer speed, you could use Google Protocol Buffers to compact your data over the wire. The project page is here.
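For instance, with Marc Gravell's protobuf-net the usage is roughly as follows (the message type and members here are illustrative):

```csharp
using System.IO;
using ProtoBuf;

[ProtoContract]
public class Update
{
    [ProtoMember(1)] public int SymbolId { get; set; }
    [ProtoMember(2)] public double Price { get; set; }
}

public static class Wire
{
    public static byte[] Encode(Update u)
    {
        using var ms = new MemoryStream();
        // Compact tag + varint/fixed encoding; no element names on the wire.
        Serializer.Serialize(ms, u);
        return ms.ToArray();
    }

    public static Update Decode(byte[] data) =>
        Serializer.Deserialize<Update>(new MemoryStream(data));
}
```

The encoded form carries only numeric field tags, which is where most of the size saving over WCF's default serialization comes from.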

Matthew
Aren't protocol buffers just a means of binary serialization? Do they have any significant advantages over WCF's means of doing the same?
ima
WCF doesn't do this kind of efficient binary serialization. Read this: http://stackoverflow.com/questions/475794/how-fast-or-lightweight-is-protocol-buffer - I can't go very deep into an explanation.
Matthew
Check out the performance comparisons of Marc Gravell's implementation: http://code.google.com/p/protobuf-net/wiki/Performance
Matthew
Yes, the benchmark results are very attractive. It's not obvious to me where this kind of performance gain - orders of magnitude - comes from; it calls for deeper investigation. I'll come back to it.
ima
DataContractJsonSerializer results in a smaller size than BinaryFormatter? Something strange must be going on in that test; I don't believe that comparison is valid for the communication I have in mind.
ima
Update on my tests: Protocol Buffers are basically straightforward C-style serialization, and they are in fact much more efficient than the binary XML of WCF (which contains lots of metadata). But there's a catch - protocol buffers make your de/serialization faster, yet the WCF calls themselves are still more or less as slow as they were with the default DataContract (in my case at least, serialization didn't take a large part of a call). So I ended up not using them, and found a solution in batching all calls on a large scale and distributing the load over time.
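A sketch of that batching approach (the class name, callback shape, and flush policy below are illustrative assumptions, not from the thread) -- coalesce many small updates into one send so the per-call WCF overhead is amortized:

```csharp
using System;
using System.Collections.Generic;

public class UpdateBatcher
{
    private readonly List<byte[]> _pending = new List<byte[]>();
    private readonly int _maxBatch;
    private readonly Action<IReadOnlyList<byte[]>> _send; // one WCF call per batch

    public UpdateBatcher(int maxBatch, Action<IReadOnlyList<byte[]>> send)
    {
        _maxBatch = maxBatch;
        _send = send;
    }

    public void Add(byte[] update)
    {
        _pending.Add(update);
        if (_pending.Count >= _maxBatch)
            Flush();
    }

    // Also call Flush from a short periodic timer, so a partial batch
    // never waits longer than the timer interval - that interval bounds
    // the extra latency the batching adds.
    public void Flush()
    {
        if (_pending.Count == 0) return;
        _send(new List<byte[]>(_pending));
        _pending.Clear();
    }
}
```

The trade-off is explicit: a larger batch or longer timer lowers per-update overhead but raises worst-case latency, so the flush interval should be tuned against the latency target.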
ima
+1  A: 

Have you looked at

http://geekswithblogs.net/BVeldhoen/archive/2008/01/26/wcf-latency-test-harness.aspx

It takes into account various bindings and data sizes.

Kyle Lahnakoski