views: 1178
answers: 1

I wrote a WCF service that should transfer files of any size, using the Streamed TransferMode in NetTcpBinding and a System.IO.Stream object.

When running performance tests, I found a significant performance problem. I then tested with the Buffered TransferMode and saw that it was two times faster!

Because my service needs to transfer big files, I can't stay in Buffered TransferMode: the memory overhead of buffering large files is too high on both the server and the client.

Why is Streamed TransferMode slower than Buffered TransferMode? What can I do to make Streamed performance better?
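For reference, the only difference between the two test runs is the transferMode attribute in the binding configuration. A sketch of that configuration (the binding name here is illustrative):

```xml
<!-- Illustrative netTcpBinding configuration; the binding name is hypothetical.
     Switching transferMode between "Streamed" and "Buffered" is the only change
     between the two test runs. -->
<system.serviceModel>
  <bindings>
    <netTcpBinding>
      <binding name="fileTransferBinding"
               transferMode="Streamed"
               maxReceivedMessageSize="9223372036854775807" />
    </netTcpBinding>
  </bindings>
</system.serviceModel>
```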

+3  A: 

How big are the chunks you are streaming? You might experiment with varying chunk sizes and varying strategies.
Also, consider using async IO to fill the buffers to be transferred, or to process them after transfer.

What I mean is, if your streaming algorithm is serial, like so:

1. Fill a chunk
2. Send the chunk
3. Get confirmation
4. More chunks? Go to step 1

...then you have a lot of unnecessary delay. If you can fill chunks and send chunks in parallel, then you'll be able to reduce waiting. Async IO is one way to do that. You would have two parallel workstreams happening. Conceptually, it might look like this:

Filling Chunks                              Sending Chunks
  1. call BeginRead                           1. get next chunk
  2. wait for callback                        2. send it
  3. more to read? yes -> go to step 1        3. await confirmation
  4. done                                     4. more? go to step 1

But using async IO, these could actually be driven by the same thread.
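In modern C# the same overlap can be expressed with async/await instead of BeginRead/EndRead callbacks. A minimal sketch (the PipelinedCopy class and CopyAsync method are mine, not part of any WCF API) that double-buffers so the read of chunk N+1 overlaps the write of chunk N:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

public static class PipelinedCopy
{
    // Double-buffered copy: while one chunk is being written out,
    // the next chunk is already being read. This removes the
    // fill -> send -> confirm serialization described above.
    public static async Task CopyAsync(Stream source, Stream destination,
                                       int chunkSize = 64 * 1024)
    {
        byte[] readBuffer = new byte[chunkSize];
        byte[] sendBuffer = new byte[chunkSize];

        int bytesRead = await source.ReadAsync(readBuffer, 0, chunkSize);
        while (bytesRead > 0)
        {
            // The chunk just read becomes the chunk to send.
            byte[] tmp = sendBuffer; sendBuffer = readBuffer; readBuffer = tmp;
            int bytesToSend = bytesRead;

            // Kick off the next read, and do the current write in parallel.
            Task<int> nextRead = source.ReadAsync(readBuffer, 0, chunkSize);
            await destination.WriteAsync(sendBuffer, 0, bytesToSend);
            bytesRead = await nextRead;
        }
    }
}
```

The same structure works whether the destination is a network stream or the Stream handed back by a WCF operation; the point is only that filling and sending are no longer serialized.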


Did you read MS's article on the topic of large data streaming in WCF?

Cheeso
My test has no IO other than WCF. The transferred Stream contains random bytes created with the Random.NextBytes() method. Anyway, exactly the same code works faster with just a configuration change to "Buffered".
DxCK
Replace "IO" in my comments with "Random.NextBytes()" and the principle still holds. In your case, do you run Random.NextBytes() on one small portion at a time? If so, consider using asynchrony to eliminate the serialization of operations.
Cheeso
I performed Random.NextBytes() on various sizes: 64k, 350k, 512k, 1MB, 2MB, 5MB and 10MB, then created a MemoryStream and gave the stream to WCF. WCF does all the rest (reads, etc.). The other side (server or client) reads the stream; there I saw no performance difference between the various buffer sizes. The results are similar for all those sizes: Buffered is two times faster than Streamed.
DxCK
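For what it's worth, the payload setup described in that comment can be sketched as follows (a minimal, self-contained illustration; 1 MB is just one of the sizes listed):

```csharp
using System;
using System.IO;

class PayloadSetup
{
    static void Main()
    {
        // Fill a buffer with random bytes, as in the test described above.
        // 1 MB is one of the sizes mentioned (64k ... 10MB).
        byte[] payload = new byte[1024 * 1024];
        new Random().NextBytes(payload);

        // Wrap it in a MemoryStream and hand that stream to WCF;
        // the other side (server or client) then reads it.
        Stream stream = new MemoryStream(payload);
        Console.WriteLine(stream.Length); // 1048576
    }
}
```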
It sure seems like a contrived, unrealistic scenario. The performance difference could be entirely due to the context switching WCF does when chunking a stream, versus what it does when reading your stream into one large buffer.
Cheeso