views: 146

answers: 3

The situation: I was using HttpWebRequest.BeginGetResponse as documented on MSDN, with a timer sending the request every ten seconds. In my tests I received well-formed, XML-structured data.

The result: at the customer's site, with the tool running, I received incomplete (and therefore unparsable) XML documents (each about 4 KB). Opening the same URL in a browser showed the complete document (presumably because the browser makes a synchronous request). I used the Content-Length header to size my receiving buffer.

What caused it? I don't know; the data is fairly small. I also used the ThreadPool.RegisterWaitForSingleObject approach described on Developer Fusion to define a timeout, and I chose ten seconds for the timeout as well. Maybe that wasn't a smart decision; it probably should be smaller than the timer interval. The thing is, I cannot test it again under those conditions: it was a production site where I had no insight into the network setup, and the requests ran just fine from home at the same time.
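For reference, the timeout pattern described above might look like the following sketch (the URL, timeout value, and callback name are my assumptions, not the original code):

```csharp
using System;
using System.Net;
using System.Threading;

class PollingClient
{
    public void SendRequest()
    {
        // Hypothetical URL standing in for the real endpoint.
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/status.xml");
        IAsyncResult result = request.BeginGetResponse(OnResponse, request);

        // Abort the request if no response arrives within 5 seconds --
        // deliberately shorter than the 10-second timer interval, so a
        // hung request is cancelled before the next one starts.
        ThreadPool.RegisterWaitForSingleObject(
            result.AsyncWaitHandle,
            (state, timedOut) => { if (timedOut) ((HttpWebRequest)state).Abort(); },
            request,
            TimeSpan.FromSeconds(5),
            true); // execute the callback only once
    }

    private void OnResponse(IAsyncResult ar)
    {
        var request = (HttpWebRequest)ar.AsyncState;
        try
        {
            using (var response = (HttpWebResponse)request.EndGetResponse(ar))
            {
                // ... read and parse the response stream here ...
            }
        }
        catch (WebException)
        {
            // EndGetResponse throws when the request was aborted by the timeout.
        }
    }
}
```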

I'm not very experienced in this field, but what happens when the timer triggers a new request before the previous response stream has been fully received, e.g. because the timeout equals the timer interval? Any other hints as to what the bottleneck could be here?

+1  A: 

The solution is simple: only restart the timer after you've finished processing the response.
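One way to do this is with a one-shot System.Threading.Timer that is rescheduled only after the response has been handled (a sketch, assuming a 10-second interval; the method names are placeholders):

```csharp
using System;
using System.Threading;

class Poller
{
    private Timer _timer;

    public void Start()
    {
        // Period = Timeout.Infinite makes this a one-shot timer:
        // it fires once after 10 seconds and then stops.
        _timer = new Timer(Poll, null, 10000, Timeout.Infinite);
    }

    private void Poll(object state)
    {
        try
        {
            // ... send the request and fully process the response here ...
        }
        finally
        {
            // Only now schedule the next poll, so requests can never overlap,
            // even if processing took longer than the interval.
            _timer.Change(10000, Timeout.Infinite);
        }
    }
}
```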

John Saunders
I guess that would be all I need; it answers what I meant by asking whether I need to take care of it. I just couldn't believe that I couldn't receive 4 KB in 10 seconds.
rdoubleui
But then the timer would no longer guarantee a fixed interval. Maybe I shouldn't force my application into fixed intervals anyway, as that depends on the connection quality.
rdoubleui
A: 

If it was a different server you were connecting to, the response from that server could also be 'chunked'. I read somewhere that HttpWebRequest has a bug where, on servers that use chunked encoding, it doesn't return the full file.

If this is the case, make sure the server doesn't have chunked mode enabled for HTTP traffic.

Or, if that is out of your reach, do the request yourself using a plain socket: send the HTTP request and read the complete result back.

Before going down that route, though, first make sure that chunked mode really is the issue here.
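To confirm whether the server actually uses chunked transfer encoding, you could inspect the response headers with a simple synchronous request (a sketch; the URL is a placeholder):

```csharp
using System;
using System.Net;

class ChunkedCheck
{
    static void Main()
    {
        // Placeholder URL -- substitute the real endpoint.
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/status.xml");
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // "Transfer-Encoding: chunked" indicates the server streams
            // the body in chunks instead of sending a Content-Length.
            string encoding = response.Headers["Transfer-Encoding"];
            bool chunked = encoding != null &&
                encoding.IndexOf("chunked", StringComparison.OrdinalIgnoreCase) >= 0;
            Console.WriteLine("Chunked: " + chunked);
        }
    }
}
```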

R

Toad
Chunked mode would also be an option; I'll try to find out more on that. Although it must somehow be linked to the traffic on that network, because, as I said, it worked from another PC at home at the same time. I'll give feedback once I've found something out.
rdoubleui
+1  A: 

How are you receiving the data? Are you reading it through a stream, and are you using the content length returned as an input parameter to Stream.Read? A not entirely obvious property of Stream.Read is that it is not guaranteed to return the amount of data you requested. When you call the following method

public abstract int Read(byte[] buffer, int offset, int count)

it returns how much data was actually read. So you may ask it to read 1000 bytes and it may return only 400, leaving 600 bytes still to read. That means you have to keep calling Read until it returns 0 (which signals the end of the stream).
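The read loop described above can be sketched like this (a minimal helper, not the original code):

```csharp
using System.IO;

static class StreamUtil
{
    // Keep calling Read until the buffer is full or the stream ends;
    // a single Read call may legally return fewer bytes than requested.
    public static int ReadFully(Stream stream, byte[] buffer)
    {
        int total = 0;
        while (total < buffer.Length)
        {
            int read = stream.Read(buffer, total, buffer.Length - total);
            if (read == 0)
                break; // 0 means end of stream
            total += read;
        }
        return total; // may be less than buffer.Length if the stream ended early
    }
}
```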

I would also say that you should not use the Content-Length header to size your buffer. Instead, create a dynamically sized buffer (e.g. by using a MemoryStream) and read from the response stream until Read returns 0. At least, that is how I would do it. That way your solution will keep working even if the server changes its implementation and no longer sends that header. Or, even better, since you are loading XML: create an XmlDocument and let it load directly from the HTTP response stream.
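The last suggestion could look like this (a sketch; the URL is a placeholder):

```csharp
using System.Net;
using System.Xml;

class XmlFetcher
{
    static XmlDocument Fetch()
    {
        // Placeholder URL -- substitute the real endpoint.
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/status.xml");
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var stream = response.GetResponseStream())
        {
            // XmlDocument.Load reads until the stream ends, so no
            // Content-Length-sized buffer is needed at all.
            var doc = new XmlDocument();
            doc.Load(stream);
            return doc;
        }
    }
}
```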

Pete
Thank you for that hint; I indeed seem to have misused the Stream.Read method.
rdoubleui