I'm currently trying to debug a customer's issue with an FTP upload feature in one of our products. The feature allows customers to upload files (< 1MB) to a central FTP server for further processing. The FTP client code was written in-house in VB.NET.

The customer reports that they receive "Connection forcibly closed by remote host" errors when they try to upload files in the range of 300KB to 500KB. However, we tested this in-house with much larger files (relatively speaking), i.e. 3MB and up, and never received this error. We uploaded to the same FTP server that the client connects to using the same FTP logon credentials, the only difference being that we did it from our office.

I know that the TCP protocol has flow control built in, so it shouldn't matter how much data is sent in a single Send call, since the protocol will throttle itself to match the server's internal limits (if I remember correctly...).

Therefore, the only thing I can think of is that an intermediate host between the client and the server is artificially rate-limiting the client and disconnecting it (we send the file data in a loop, in 512-byte chunks).

This is the loop that is used to send the data (buffer is a Byte array containing the file data):

            ' Send the file in 512-byte chunks; the final chunk may be shorter
            ' than 512 bytes, so clamp the size to the data that remains.
            For i = 0 To buffer.Length - 1 Step 512
                Dim size As Integer = Math.Min(512, buffer.Length - i)
                mDataSocket.Send(buffer, i, size, SocketFlags.None)
                OnTransferStatus(i, buffer.Length)
            Next

Is it possible that the customer's ISP (or their own firewall) is imposing an artificial rate-limit on how much data our client code can send within a given period of time? If so, what is the best way to handle this situation? I guess the obvious solution would be to introduce a delay in our send loop, unless there is a way to do this at the socket level.
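
For what it's worth, the kind of delay I have in mind would look something like the sketch below; the 100 KB/s target is an arbitrary figure for illustration, not a limit we have measured:

            ' Rough throttling sketch: cap the send rate by sleeping between
            ' chunks. The target rate below is purely illustrative.
            Const targetBytesPerSecond As Integer = 100 * 1024
            Const chunkSize As Integer = 512
            Dim delayMs As Integer = CInt(1000.0 * chunkSize / targetBytesPerSecond)

            For i = 0 To buffer.Length - 1 Step chunkSize
                Dim size As Integer = Math.Min(chunkSize, buffer.Length - i)
                mDataSocket.Send(buffer, i, size, SocketFlags.None)
                OnTransferStatus(i, buffer.Length)
                System.Threading.Thread.Sleep(delayMs)
            Next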

It seems really odd to me that an ISP would handle a rate-limit violation by killing the client connection. Why wouldn't they just rely on TCP/IP's internal flow-control/throttling mechanism?

A: 

I don't think the ISP would try to kill a 500KB file transfer. I'm no expert on sockets or on ISPs... just giving my thoughts on the matter.

Mostlyharmless
It's not a question of the amount of data being sent, it's a question of how frequently data is pumped across the wire. I'm wondering if any ISPs are known to artificially impose some kind of rate limit on top of TCP, i.e. you can only send a max of 100KB/s or the connection will be closed.
Mike Spross
+2  A: 

Do a search for Comcast and BitTorrent. Here's one article.

Mark Ransom
+1. This could explain why the connection is dropped rather than simply throttled down (since their motive seems to be to stop file-sharing on their network outright). I guess I was forgetting how underhanded ISPs can be ;-)
Mike Spross
+1  A: 

Try to isolate the issue:

  • Let the customer upload the same file to a different server. Maybe the problem is with the client's ... FTP client.
  • Get the file from the client and upload it yourself with your client and see if you can repro the issue.

In the end, even if a 3MB file works fine, a 500KB file isn't guaranteed to work, because the issue could be state-dependent and could be happening as the file transfer ends.
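
If you also want to take the in-house client out of the equation, a quick control test with the built-in FtpWebRequest class could help; the server address, credentials, and file path below are placeholders:

    Imports System.IO
    Imports System.Net

    Module UploadRepro
        Sub Main()
            ' Placeholder server, credentials, and file - substitute real values.
            Dim request As FtpWebRequest = CType(WebRequest.Create("ftp://ftp.example.com/incoming/test.dat"), FtpWebRequest)
            request.Method = WebRequestMethods.Ftp.UploadFile
            request.Credentials = New NetworkCredential("user", "password")
            request.UsePassive = True

            Dim data As Byte() = File.ReadAllBytes("C:\temp\test.dat")
            request.ContentLength = data.Length

            Using stream As Stream = request.GetRequestStream()
                stream.Write(data, 0, data.Length)
            End Using

            Using response As FtpWebResponse = CType(request.GetResponse(), FtpWebResponse)
                Console.WriteLine("Upload finished: " & response.StatusDescription)
            End Using
        End Sub
    End Module

If that upload dies the same way from the customer's site, the client code is probably not the culprit.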

xmjx
+1  A: 

Yes, ISPs can impose limits on packets as they see fit (although it is ethically questionable). My ISP, for example, has no problem cutting any P2P traffic its hardware manages to sniff out. It's called traffic shaping.

However, for FTP traffic this is highly unlikely, but you never know. The thing is, they never drop your sockets with traffic shaping, they only drop packets. The TCP protocol is handled on each peer's side, so you can drop all the packets in between and the socket stays alive. In some instances, if one of the computers crashes, the socket remains alive as long as you don't try to use it.

I think your best bet is a bad firewall/proxy configuration on the client side. Better explanations here.

Either that, or a faulty or badly configured router or cable at the client's installation.

Caerbanog
The article Mark Ransom linked to seems to imply that some ISPs actually reset the TCP connection. So it seems ISPs can do this either by dropping individual packets or by tearing down the connection. And... I agree with the "ethically questionable" aspect.
Mike Spross
+1  A: 

500KB is awfully small these days, so I'd be a little surprised if they throttle something that small.

I know you're already chunking your request, but can you determine if any data is transferred? Does the code always fail at the same point in the loop? Are you able to look at the FTP server logs? What about an entire stack trace? Have you tried contacting the ISP and asking them what policies they have?

That said, assuming that some data makes it through, one thought is that the ISP applies traffic shaping and the rules kick in after x bytes have been sent. What could be happening is that, once the transfer exceeds x bytes, the socket timeout expires before the data can be sent, throwing an exception.
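
If the client sets a send timeout, that failure mode would surface as a SocketException on the blocking Send. A rough sketch (the timeout value and socket name are just assumptions about your in-house client, and it needs Imports System.Net.Sockets):

    ' Illustrative only - assumes mDataSocket is the data-connection socket.
    mDataSocket.SendTimeout = 30000 ' 30 seconds; pick whatever fits your product

    Try
        mDataSocket.Send(buffer, i, size, SocketFlags.None)
    Catch ex As SocketException When ex.SocketErrorCode = SocketError.TimedOut
        ' The send stalled long enough to hit the timeout; log it so you can
        ' tell a shaped/stalled link apart from an outright connection reset.
        Console.WriteLine("Send timed out: " & ex.Message)
    End Try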

Keep in mind FTP clients create a separate connection for the data transfer, but if the server detects that the control connection is closed, it will typically kill the data connection as well. So another thing to check is that the control connection stays alive.
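
A cheap way to check that from the client side is a zero-timeout poll on the control socket. The function below is just a sketch (it assumes Imports System.Net.Sockets and that you can get at the control-connection socket in your client):

    ' A socket that selects as readable but has no data pending has usually
    ' been closed by the remote end.
    Private Function IsControlConnectionAlive(ByVal controlSocket As Socket) As Boolean
        If Not controlSocket.Connected Then Return False
        Dim readable As Boolean = controlSocket.Poll(0, SelectMode.SelectRead)
        Return Not (readable AndAlso controlSocket.Available = 0)
    End Function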

Lastly, FTP servers usually support resumable transfers, so if all other remedies fail, resuming the failed transfer might be the easiest solution.

Robert Paulson
+1 All good points. The command connection does stay open during the transfer, but that's an interesting point nonetheless. I don't recall reading that in the RFC, but then again I wrote the FTP client about 4 years ago - my memory is probably a bit fuzzy ;-)
Mike Spross