I'm troubleshooting some communication issues, and in the network traces I occasionally come across TCP sequence errors. Here's one example:
- Server to Client: Seq=3174, Len=50
- Client to Server: Ack=3224
- Server to Client: Seq=3224, Len=50
- Client to Server: Ack=3224
- Server to Client: Seq=3274, Len=10
- Client to Server: Ack=3224, SLE=3274, SRE=3284
Packets 4 and 5 are recorded in the trace (which was captured on a router between the client and server) at almost exactly the same time, so they most likely crossed in transit.
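To make the gap explicit, here's a quick sketch (Python, purely for the arithmetic) of the byte range each server segment covers and what the client's final cumulative ACK plus SACK block implies:

```python
# Each TCP data segment covers [seq, seq + len): list the server's
# segments from the trace and compare them against the client's report.
segments = {
    "packet 1": (3174, 50),  # server -> client
    "packet 3": (3224, 50),  # server -> client
    "packet 5": (3274, 10),  # server -> client
}

for name, (seq, length) in segments.items():
    print(f"{name}: bytes {seq}-{seq + length - 1}, ack after receipt = {seq + length}")

# Packet 6 from the client: cumulative Ack=3224, SACK block SLE=3274, SRE=3284.
ack, sle, sre = 3224, 3274, 3284

# Cumulative ACK: everything below 3224 arrived in order (packet 1 is fine).
# SACK block: bytes 3274-3283 (packet 5's payload) arrived out of order.
# The hole between them is exactly packet 3's 50-byte payload:
hole = (ack, sle)  # bytes 3224-3273
print(f"missing: bytes {hole[0]}-{hole[1] - 1} ({hole[1] - hole[0]} bytes)")
```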
The TCP session has got out of sync. Going by the SACK block in packet 6, the client received the 10 bytes from packet 5 (3274-3283) out of order but never received the 50 bytes from packet 3 (3224-3273). That segment should have been retransmitted, but it wasn't; the next entry in the trace is a RST packet from the client 24 seconds after packet 6.
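For what it's worth, even under conservative assumptions about the retransmission timer the server should have retried several times inside that 24-second window. A rough sketch, assuming an illustrative 200 ms initial RTO that doubles on each failed attempt (these numbers are assumptions, not measured from the trace):

```python
# Illustrative exponential RTO backoff: the timeout doubles after each
# failed retransmission. Values are assumptions, not taken from the trace.
initial_rto = 0.2  # seconds; a common minimum RTO on Linux is ~200 ms
elapsed = 0.0
attempts = 0
while elapsed + initial_rto * 2 ** attempts <= 24.0:
    elapsed += initial_rto * 2 ** attempts
    attempts += 1
print(f"{attempts} retransmission attempts would fit in 24 s "
      f"(cumulative wait {elapsed:.1f} s)")
```

So the complete silence from the server until the client's RST is what makes this odd.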
My question is: what could be responsible for the failure to retransmit the server data from packet 3? I would assume the retransmit happens at the operating-system level, but is there any way the application could influence it and stop it being sent? A thread blocking or being put to sleep, something like that?