I am working on Linux. I have an HTTP client that requests some data from an HTTP server. The HTTP client is written in C and the HTTP server in Perl. I want to simulate TCP retransmission timeouts at the client end.
I assume that closing the socket gracefully would not cause the client to retransmit its requests.
So I tried the following two scenarios:
Scenario 1: Exit the server as soon as it receives the HTTP GET request. However, I noticed that even though the application exits without ever calling "close" on the socket, the connection is still closed gracefully: the server sends FIN, ACK towards the client. I have observed the same behaviour with a simple TCP server and client written in C.
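For reference, this is roughly the simple C test I used to reproduce it (error handling stripped; the port number is arbitrary). The process reads the request and then exits without ever calling close(), yet tcpdump still shows a FIN, ACK from the server side:

```c
/* Minimal reproduction: accept one connection, read the request,
 * then exit WITHOUT calling close(). The kernel still tears the
 * connection down gracefully (FIN, ACK) when the process exits. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    char buf[1024];
    int lfd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8080);   /* arbitrary test port */

    bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(lfd, 1);

    int cfd = accept(lfd, NULL, NULL);
    (void)read(cfd, buf, sizeof(buf));    /* consume the GET request */

    exit(0);  /* no close(cfd) - the kernel closes it for us */
}
```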
Scenario 2: The server does not send any response to the client's GET request. In this case I still see a FIN, ACK sent by the server. It seems that in both cases the OS (Linux) takes care of closing the socket with the peer. Is there any way to suppress this behaviour (using ioctl or setsockopt options), or any other way to simulate TCP retransmission timeouts?
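For what it's worth, the closest setsockopt option I have come across so far is SO_LINGER with a zero linger time, which (if I understand it correctly) only changes the teardown from a FIN to a RST rather than suppressing it; a rough sketch:

```c
/* Sketch: with l_onoff = 1 and l_linger = 0, close() discards any
 * unsent data and aborts the connection with RST instead of FIN.
 * This changes the teardown but does not suppress it, and it does
 * not help here anyway: as far as I can tell the kernel has already
 * ACKed the client's GET, so the client never retransmits. */
#include <sys/socket.h>

static int set_abortive_close(int sockfd)
{
    struct linger lin = { .l_onoff = 1, .l_linger = 0 };
    return setsockopt(sockfd, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin));
}
```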
Thanks.