The situation: I was using HttpWebRequest.BeginGetResponse as documented on MSDN. A timer sent the request every ten seconds, and when I tested it I received XML-structured information.
The result: Running that tool at the customer's site, I received incomplete (and therefore unparsable) XML documents (each about 4 KB). Checking the same URL in a browser showed the content completely (obviously a synchronous request through the browser?!). I used the Content-Length header to size my receiving buffer.
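For comparison, here is a minimal sketch (not my actual code; the callback name, state handling, and UTF-8 assumption are illustrative) of a response callback that drains the stream in a loop instead of relying on a single read sized by Content-Length:

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

class ResponseReader
{
    // Callback passed to BeginGetResponse; the request itself was passed as state.
    static void ResponseCallback(IAsyncResult ar)
    {
        var request = (HttpWebRequest)ar.AsyncState;
        using (var response = (HttpWebResponse)request.EndGetResponse(ar))
        using (var stream = response.GetResponseStream())
        using (var buffered = new MemoryStream())
        {
            var buffer = new byte[4096];
            int read;
            // Read() may return fewer bytes than requested, so loop until it returns 0.
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                buffered.Write(buffer, 0, read);
            }
            string xml = Encoding.UTF8.GetString(buffered.ToArray());
            // ... parse xml here ...
        }
    }
}
```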
What caused it? I don't know. The data is fairly small. I did use the ThreadPool.RegisterWaitForSingleObject approach described at developer fusion to define a timeout, and I chose ten seconds for the timeout as well. Maybe that wasn't a smart decision; it probably should be smaller than the timer interval. The thing is, I cannot test it again under those conditions. It was at a production site, where I had no insight into the network setup. The requests ran just fine at the same time from home.
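This is roughly the pattern I followed (a sketch of the MSDN/developer fusion approach as I understood it, with illustrative names rather than my production code):

```csharp
using System;
using System.Net;
using System.Threading;

class TimedRequest
{
    const int TimeoutMilliseconds = 10000; // the same ten seconds I used on site

    static void StartRequest(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        IAsyncResult result = request.BeginGetResponse(ResponseCallback, request);

        // Abort the request if the response has not arrived within the timeout.
        ThreadPool.RegisterWaitForSingleObject(
            result.AsyncWaitHandle,
            TimeoutCallback,
            request,
            TimeoutMilliseconds,
            true); // execute only once
    }

    static void TimeoutCallback(object state, bool timedOut)
    {
        var request = (HttpWebRequest)state;
        if (timedOut && request != null)
        {
            request.Abort(); // EndGetResponse in the callback will then throw
        }
    }

    static void ResponseCallback(IAsyncResult ar)
    {
        // drain the response stream as sketched above
    }
}
```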
I'm not very experienced in this field, but what happens when the timer triggers a new request before the previous response stream has been fully received, for example because the timeout is equal to the timer interval? Any other hints as to what could be the bottleneck here?
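For what it's worth, the kind of guard I'm considering (a hypothetical sketch, not what was running at the customer's site; the URL is a placeholder) would simply skip a tick while a request is still outstanding:

```csharp
using System.Threading;

class Poller
{
    // 0 = idle, 1 = a request is still in flight.
    static int _busy;

    // Called by the timer every ten seconds.
    static void OnTimerTick(object state)
    {
        // Skip this tick if the previous request has not completed yet.
        if (Interlocked.CompareExchange(ref _busy, 1, 0) != 0)
            return;

        try
        {
            StartRequest("http://example.invalid/status.xml"); // placeholder URL
        }
        catch
        {
            Interlocked.Exchange(ref _busy, 0);
            throw;
        }
        // The response and timeout callbacks must reset _busy to 0 when they finish.
    }

    static void StartRequest(string url)
    {
        // BeginGetResponse + RegisterWaitForSingleObject as sketched above
    }
}
```

Would something like this even be necessary, or does HttpWebRequest cope with overlapping requests on its own?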