I'm troubleshooting some communications issues, and in the network traces I occasionally come across TCP sequence errors. One example I've captured is:

  1. Server to Client: Seq=3174, Len=50
  2. Client to Server: Ack=3224
  3. Server to Client: Seq=3224, Len=50
  4. Client to Server: Ack=3224
  5. Server to Client: Seq=3274, Len=10
  6. Client to Server: Ack=3224, SLE=3274, SRE=3284

Packets 4 & 5 are recorded in the trace (which is from a router between the client and server) at almost exactly the same time, so they most likely crossed in transit.

The TCP session has got out of sync, with the client missing the last two transmissions from the server. Those two packets should have been retransmitted, but they weren't; the next entry in the trace is a RST packet from the client 24 seconds after packet 6.

My question is: what could be responsible for the failure to retransmit the server data from packets 3 & 5? I would assume that retransmission happens at the operating-system level, but is there any way the application could influence it and stop it from being sent? A thread blocking, or being put to sleep, or something like that?

+1  A: 

Only one packet has been lost from server to client: packet 3. Packet 6 contains a selective acknowledgement (SACK) for packet 5 (SLE=3274, SRE=3284), so that one got through.
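
To make that concrete, here is a small, purely illustrative Python sketch that replays the arithmetic from the trace above: the cumulative ACK covers everything below byte 3224, the SACK block covers bytes 3274-3284, and the only hole left is exactly the 50 bytes carried by packet 3.

    # Purely illustrative: replay the trace's arithmetic. The segments below
    # are the server's transmissions (trace packets 1, 3 and 5), given as
    # (first sequence number, payload length). The client's final state is
    # Ack=3224 with one SACK block (SLE=3274, SRE=3284).
    segments = [(3174, 50), (3224, 50), (3274, 10)]
    ack = 3224                    # everything below 3224 received in order
    sack_blocks = [(3274, 3284)]  # bytes 3274-3283 received out of order

    def received(first, length):
        """True if [first, first+length) is covered by the ACK or a SACK block."""
        last = first + length
        return last <= ack or any(sle <= first and last <= sre
                                  for sle, sre in sack_blocks)

    for i, (seq, length) in enumerate(segments, start=1):
        status = "received" if received(seq, length) else "MISSING (needs retransmit)"
        print(f"segment {i}: bytes {seq}-{seq + length - 1}: {status}")

    # Output:
    # segment 1: bytes 3174-3223: received
    # segment 2: bytes 3224-3273: MISSING (needs retransmit)
    # segment 3: bytes 3274-3283: received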

In answer to your specific question: no, application-level issues shouldn't prevent TCP retransmissions. Retransmission is handled by the operating system's TCP stack; once the application has written data into the send buffer, it has no say in when segments are sent or resent.
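
As a rough illustration of that (assuming a Linux host; the sysctl names below are Linux-specific), the knobs that govern retransmission behaviour are kernel settings, not anything the application configures per write:

    # Illustrative only, and Linux-specific: TCP retransmission policy is
    # kernel state exposed via sysctls, not application-level configuration.
    from pathlib import Path

    SYSCTL_DIR = Path("/proc/sys/net/ipv4")

    for name in ("tcp_sack",       # whether SACK may be negotiated
                 "tcp_retries2"):  # retransmit attempts before the kernel gives up
        try:
            value = (SYSCTL_DIR / name).read_text().strip()
            print(f"{name} = {value}")
        except FileNotFoundError:
            print(f"{name}: not available on this system")

A blocked or sleeping thread can stop data being read from or written to the socket buffers, but segments already handed to the kernel keep being retransmitted on the kernel's own timers.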

caf
I didn't think the application got involved, but that means I've got something weird happening on the network, which is even trickier to sort out :(. I take it the data from packet 5 wouldn't be passed to the application until packet 3's data was received?
sipwiz
sipwiz: Correct. What you probably need to do next is get packet logs at both ends, rather than just one in the middle (see the sketch below).
caf
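
As an illustrative footnote to caf's suggestion, one way to grab a capture at each end is Scapy; the address, port, and filename here are placeholders, not values from the trace above.

    # Sketch of capturing one end of the conversation with Scapy; run the
    # equivalent on both client and server, then compare the two pcaps to
    # see where the missing segment disappears. Filter values are placeholders.
    from scapy.all import sniff, wrpcap

    packets = sniff(filter="tcp and host 192.0.2.10 and port 5000",
                    count=200)  # stop after 200 packets, just for the example
    wrpcap("client_side.pcap", packets)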