I am having a problem using HttpWebRequest against an HTTP daemon on an embedded device. The problem appears to be that there is enough of a delay between the HTTP headers being written to the socket stream and the HTTP payload (a POST) that the TCP stack sends what is already in the socket buffer on to the server before the body arrives. This results in the HTTP request being split across two packets (fragmentation).
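For context, the sending code is roughly this shape (the endpoint and payload are placeholders, not the real device URL):

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

class PostRepro
{
    static void Main()
    {
        // Placeholder endpoint and payload, just to show the shape of the request.
        var request = (HttpWebRequest)WebRequest.Create("http://192.168.0.10/cgi-bin/config");
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";

        byte[] body = Encoding.ASCII.GetBytes("setting=value");
        request.ContentLength = body.Length;

        // By the time the body is written here, the headers have already gone out
        // on the socket, which is where the gap between the two packets comes from.
        using (Stream requestStream = request.GetRequestStream())
        {
            requestStream.Write(body, 0, body.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine(response.StatusCode);
        }
    }
}
```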
This is perfectly valid, of course; however, the server at the other end doesn't cope with it if the two packets are split by more than about 1.8 ms. So I am wondering if there are any realistic ways to control this on the client.
There do not appear to be any properties on HttpWebRequest that give this level of control over the socket used for the send, and one doesn't appear to be able to get at the socket itself (e.g. via reflection) because it is only created during the send and released afterwards (as part of the outbound HTTP connection pooling). The AllowWriteStreamBuffering property just buffers the body content within the web request (so it's still available for redirects etc.), and doesn't appear to affect the way the overall request is written to the socket.
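For concreteness, this is the sort of buffering setting I mean (a fragment continuing the sketch above, illustrative only):

```csharp
var request = (HttpWebRequest)WebRequest.Create("http://192.168.0.10/cgi-bin/config");

// Keeps the body buffered inside the request object (so it can be replayed on
// redirects etc.), but as far as I can tell it doesn't change how the headers
// and body are written out to the underlying socket.
request.AllowWriteStreamBuffering = true;   // true is the default anyway
request.SendChunked = false;                // content length is known up front
```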
So what to do?
(I'm really trying to avoid having to rewrite the HTTP client from the socket up.)
One option might be to write some kind of proxy that the HttpWebRequest sends to (maybe via the ServicePoint), and in that implementation buffer the entire TCP request before forwarding it. But that seems like a lot of hard work.
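Something along these lines is what I have in mind; this is a rough sketch only, and the addresses, port, and the one-request-per-connection / Content-Length-only assumptions are mine. The idea is to point the HttpWebRequest at a local listener, read the whole request into memory, then push it to the device in a single write:

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Text;

class BufferingForwarder
{
    const string DeviceHost = "192.168.0.10";   // placeholder device address
    const int DevicePort = 80;

    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 8888);
        listener.Start();

        while (true)
        {
            using (TcpClient client = listener.AcceptTcpClient())
            using (NetworkStream clientStream = client.GetStream())
            {
                // Accumulate the complete request (headers + body) in memory first.
                byte[] request = ReadWholeRequest(clientStream);

                using (var device = new TcpClient(DeviceHost, DevicePort))
                using (NetworkStream deviceStream = device.GetStream())
                {
                    device.NoDelay = true;
                    // Single write, so headers and body should leave together
                    // (assuming the whole request fits in one segment).
                    deviceStream.Write(request, 0, request.Length);

                    // Relay the response straight back to the HttpWebRequest.
                    deviceStream.CopyTo(clientStream);
                }
            }
        }
    }

    // Reads the headers byte-by-byte, then reads Content-Length body bytes.
    // Assumes one request per connection and no chunked encoding.
    static byte[] ReadWholeRequest(Stream stream)
    {
        var buffer = new MemoryStream();
        var one = new byte[1];

        while (!EndsWithBlankLine(buffer))
        {
            if (stream.Read(one, 0, 1) == 0) return buffer.ToArray();
            buffer.Write(one, 0, 1);
        }

        int contentLength = 0;
        string headers = Encoding.ASCII.GetString(buffer.ToArray());
        foreach (string line in headers.Split(new[] { "\r\n" }, StringSplitOptions.None))
        {
            if (line.StartsWith("Content-Length:", StringComparison.OrdinalIgnoreCase))
                contentLength = int.Parse(line.Substring("Content-Length:".Length).Trim());
        }

        var body = new byte[contentLength];
        int read = 0;
        while (read < contentLength)
        {
            int n = stream.Read(body, read, contentLength - read);
            if (n == 0) break;
            read += n;
        }
        buffer.Write(body, 0, read);
        return buffer.ToArray();
    }

    static bool EndsWithBlankLine(MemoryStream ms)
    {
        byte[] data = ms.ToArray();
        if (data.Length < 4) return false;
        int i = data.Length - 4;
        return data[i] == '\r' && data[i + 1] == '\n' && data[i + 2] == '\r' && data[i + 3] == '\n';
    }
}
```

The client would then be pointed at http://localhost:8888/... instead of the device, which changes the Host header among other things, so it feels fragile as well as being a fair amount of work.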
It also works fine when I'm running Fiddler (for the same reason: it acts as a buffering proxy), but that's not really an option in our production environment...
[PS: I know it's definitely the interval between the fragmented packets that's the problem, because I knocked up a socket-level test where I explicitly controlled the fragmentation using a socket with NoDelay set.]
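That test looked roughly like this (address and request contents are illustrative); with NoDelay, each Send goes out as its own packet, so the Sleep directly controls the gap between the header packet and the body packet:

```csharp
using System;
using System.Net.Sockets;
using System.Text;
using System.Threading;

class FragmentationTest
{
    static void Main()
    {
        string body = "setting=value";
        string headers =
            "POST /cgi-bin/config HTTP/1.1\r\n" +
            "Host: 192.168.0.10\r\n" +
            "Content-Type: application/x-www-form-urlencoded\r\n" +
            "Content-Length: " + body.Length + "\r\n" +
            "Connection: close\r\n" +
            "\r\n";

        using (var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp))
        {
            socket.NoDelay = true;                  // disable Nagle so the two sends aren't coalesced
            socket.Connect("192.168.0.10", 80);     // placeholder device address

            socket.Send(Encoding.ASCII.GetBytes(headers));
            Thread.Sleep(5);                        // vary this; anything much over ~1.8 ms upsets the device
            socket.Send(Encoding.ASCII.GetBytes(body));

            var response = new byte[4096];
            int n = socket.Receive(response);
            Console.WriteLine(Encoding.ASCII.GetString(response, 0, n));
        }
    }
}
```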