We have an application server that has been observed sending packets with a TCP window size of 0 at times when the network was congested (at a client's site).
We would like to know whether it is Indy or the underlying Windows layer that is responsible for adjusting the TCP window size down from the nominal 64K to adapt to the available throughput.
We would also like to be able to act when it drops to 0 (nothing gets sent, users wait => no good).
So any info, links, or pointers to Indy code are welcome...
Disclaimer: I'm not a network specialist. Please keep the answer understandable for the average me ;-)
Note: it's Indy9/D2007 on Windows Server 2003 SP2.
More gory details:
The TCP zero window cases occur on the middle tier's connection to the DB server.
They happen at the same moments when end users complain of slowdowns in the client application (that's what triggered the network investigation).
Two major network issues causing bottlenecks have been identified.
The TCP zero window occurred when there was network congestion, but may or may not have been caused by it.
We want to know when that happens and have a way to do something about it (logging at least) in our code.
So the core question is: who sets the window size to 0, and where?
Where should we hook (in Indy?) to know when that condition occurs?
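To make it concrete, below is a rough sketch of the kind of check we imagine adding, assuming we can get the raw socket handle from the Indy connection (e.g. from its Binding) and assuming a zero window is advertised when the socket's receive buffer fills up; please correct us if either assumption is wrong. It polls the unread backlog on the socket (FIONREAD) and compares it to the receive buffer size (SO_RCVBUF), logging when the backlog gets close to the buffer size.

    // Rough sketch only, not production code. Assumes WinSock and SysUtils
    // are in the uses clause, and that ASocket is the raw handle taken from
    // the Indy connection (e.g. via its Binding).
    procedure LogReceiveBacklog(ASocket: TSocket);
    var
      Pending: u_long;
      BufSize, OptLen: Integer;
    begin
      // Bytes currently readable on the socket (roughly the unread backlog).
      Pending := 0;
      if ioctlsocket(ASocket, FIONREAD, Pending) <> 0 then
        Exit; // call failed, nothing to report

      // Size of the receive buffer the stack advertises its window from.
      BufSize := 0;
      OptLen := SizeOf(BufSize);
      if getsockopt(ASocket, SOL_SOCKET, SO_RCVBUF, PChar(@BufSize), OptLen) <> 0 then
        Exit;

      // Report when the backlog reaches ~90% of the buffer (threshold arbitrary).
      if (BufSize > 0) and (Integer(Pending) >= BufSize - BufSize div 10) then
        // Placeholder: replace WriteLn with the application's own logging.
        WriteLn(Format('Receive buffer nearly full: %d of %d bytes unread',
          [Integer(Pending), BufSize]));
    end;

Is something along these lines the right direction, or is there a better place in Indy itself to detect this?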