I'm currently trying to debug a customer's issue with an FTP upload feature in one of our products. The feature allows customers to upload files (< 1MB) to a central FTP server for further processing. The FTP client code was written in-house in VB.NET.
The customer reports that they receive "Connection forcibly closed by remote host" errors when they try to upload files in the range of 300KB to 500KB. However, we tested this in-house with much larger files (3MB and up) and never received this error. We uploaded to the same FTP server that the client connects to, using the same FTP logon credentials; the only difference is that we did it from our office.
I know that TCP has flow control built in, so it shouldn't matter how much data is sent in a single Send call; the protocol should throttle itself to match whatever the server can accept (if I remember correctly...).
Therefore, the only thing I can think of is that an intermediate host between the client and the server is artificially rate-limiting the client and disconnecting it (we send the file data in a loop, in 512-byte chunks).
This is the loop that is used to send the data (buffer is a Byte array containing the file data):
For i = 0 To buffer.Length - 1 Step 512
    mDataSocket.Send(buffer, i, 512, SocketFlags.None)
    OnTransferStatus(i, buffer.Length)
Next
Is it possible that the customer's ISP (or their own firewall) is imposing an artificial rate-limit on how much data our client code can send within a given period of time? If so, what is the best way to handle this situation? I guess the obvious solution would be to introduce a delay in our send loop, unless there is a way to do this at the socket level.
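For reference, this is roughly what I had in mind for the delay approach. It's only a sketch, not what we ship: SendThrottled and delayMs are made-up names, I've left out our progress callback, and I clamp the size of the final chunk with Math.Min since buffer.Length won't generally be an exact multiple of 512.

Imports System.Net.Sockets
Imports System.Threading

Module ThrottledUpload
    ' Hypothetical helper: send the file in 512-byte chunks with a fixed
    ' pause between chunks, as a crude client-side rate limit. delayMs is
    ' a guess and would need tuning against whatever limit the customer's
    ' network actually imposes.
    Public Sub SendThrottled(ByVal dataSocket As Socket, ByVal buffer As Byte(), ByVal delayMs As Integer)
        Const ChunkSize As Integer = 512

        For i As Integer = 0 To buffer.Length - 1 Step ChunkSize
            ' Clamp the final chunk so we never send past the end of the buffer.
            Dim count As Integer = Math.Min(ChunkSize, buffer.Length - i)

            dataSocket.Send(buffer, i, count, SocketFlags.None)
            ' (progress callback omitted in this sketch)

            ' Crude pacing: at most 512 bytes per delayMs milliseconds.
            Thread.Sleep(delayMs)
        Next
    End Sub
End Module

Calling it as SendThrottled(mDataSocket, buffer, 50) would pause 50 ms between chunks, capping throughput at roughly 10 KB/s.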
It seems really odd to me that an ISP would handle a rate-limit violation by killing the client connection. Why wouldn't they just rely on TCP/IP's internal flow-control/throttling mechanism?
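One thing I'm planning to try in order to narrow this down: set a send timeout on the data socket, so that if the connection is merely being throttled (flow control stalling the Send) we see a timeout we can log, whereas an active reset should keep surfacing as the "forcibly closed" error. Again, just a sketch; SendWithTimeout is a made-up name and 30 seconds is an arbitrary value:

Imports System.Net.Sockets

Module SendDiagnostics
    ' Hypothetical diagnostic: a blocking Send() simply waits when TCP flow
    ' control kicks in, so a long stall is invisible. With SendTimeout set,
    ' a stall surfaces as a SocketException with SocketError.TimedOut, which
    ' is distinguishable from a connection reset.
    Public Sub SendWithTimeout(ByVal dataSocket As Socket, ByVal buffer As Byte(), ByVal offset As Integer, ByVal count As Integer)
        dataSocket.SendTimeout = 30000 ' milliseconds; arbitrary value for testing

        Try
            dataSocket.Send(buffer, offset, count, SocketFlags.None)
        Catch ex As SocketException When ex.SocketErrorCode = SocketError.TimedOut
            Console.WriteLine("Send stalled for 30s -- looks like throttling rather than a reset.")
            Throw
        End Try
    End Sub
End Module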