views: 4930 · answers: 20

I am trying to find an alternative to FTP. It's a single file transfer of up to 4 GB.

Any suggestions? Maybe HTTP? Or should I stick it out with FTP?

  • More info:

We have an app that we distribute to tens of thousands of clients, who upload single large files. FTP has proven to be error-prone with a single file of that size.

Speed is always a consideration. 'Resume' is a must. Cost shouldn't be an issue - I guess it depends.

+4  A: 

Try rsync

Mez
+5  A: 

I use SCP at work, mainly because it has built-in encryption and authentication.

You shouldn't really be using FTP for anything that is remotely sensitive.
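For reference, a single-file copy over SSH is a one-liner (the host and path below are placeholders); note that plain scp can't resume an interrupted copy, so for the resume requirement rsync over SSH or SFTP is a better fit:

scp big-file.dat user@remote-host:/uploads/    # encrypted, authenticated copy over SSH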

Simon Johnson
A: 

If you need a free technology and want to be a bit more secure, you could use SFTP instead. However, encryption introduces overhead, so transfers could be slower than plain FTP.

hangy
A: 

My experience is that HTTP clients usually support HTTP/1.1 and so, due to the added complexity when reading, are, if anything, slower than FTP. (BitTorrent won't help for a one-off transfer, nor in the short term.)

You might want to tune your TCP parameters to make the transfer go faster (which would depend on your operating system). For example, larger TCP buffers can prevent your transfer from going in spurts due to the latency in getting ACKs.
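As a rough illustration only (Linux-specific, and the values are arbitrary examples rather than recommendations), raising the maximum socket buffer sizes looks like this:

sysctl -w net.core.rmem_max=16777216    # max receive buffer, in bytes
sysctl -w net.core.wmem_max=16777216    # max send buffer, in bytes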

You should probably also check for support for restarting an interrupted transfer. That can be supported by HTTP or FTP, but you need support in both the client and the server.
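For example, assuming the server honours range requests, curl can pick up a download where it left off (the URL is a placeholder):

curl -C - -O http://example.com/big-file.dat    # '-C -' resumes from the existing partial file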

tye
Your problem with HTTP is probably a bad client or server. When transferring a 4 GB file without chunked encoding, HTTP's protocol and network overhead is almost zero.
stepancheg
A: 

In such cases I share the file on my local FTP/HTTP server, then log in to the remote server over SSH, start screen (or something similar), and download the file with wget or the like.
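Roughly, that workflow looks like this (host names are placeholders):

ssh user@remote-server
screen                                        # keeps the download alive if the SSH session drops
wget -c http://my-local-server/big-file.dat   # '-c' continues a partial download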

Sergei Stolyarov
+34  A: 

If you want the ability to restart mid-transfer, rsync is ideal; it can be run over SSH, raw sockets, or any other kind of pipeline.

SFTP is also a good tool for the job (and, thanks to its inclusion in modern SSH implementations, widely available).

If you're looking to automate the transfer process, rsync is trivially called from shell; lftp can be used to scriptably run file transfers over SFTP (or FTP, or WebDAV, or a great many other protocols); and the paramiko library is great for writing SFTP clients and servers in Python or Java.
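As a minimal sketch (host and path are placeholders), a resumable push over SSH can look like:

rsync --partial --progress -e ssh big-file.dat user@remote-host:/uploads/
# --partial keeps the partly transferred file so a re-run continues where it stopped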

Charles Duffy
+3  A: 

Yes, there is, but it's rarely heard of these days. FSP (File Service Protocol) has fallen out of favor, which is a shame, because for low-priority transfers of really large files from a single source it's ideal - and it also fits in 'under the radar' nicely.

http://fsp.sourceforge.net/

and the wiki has some details

http://en.wikipedia.org/wiki/File_Service_Protocol

It's also considerably more tolerant of network errors than FTP - as I recall, virtually bomb-proof. I used to use it in preference for downloads from SimTel and the like because (a) my network at that point (mid-'90s) was prone to timeout errors and (b) for the large archives all the anonymous FTP slots were often full, but FSP was always available. This was even to the point where, if I switched my PC off in the evening and restarted it in the morning, any transfer would reconnect automatically and carry on regardless.

Cruachan
+1  A: 

HTTP might be faster if you manage to download different pieces in parallel (which needs HTTP/1.1, AFAIK). The complexity of the channel encoding is hardly relevant here, since your machine will hit its bandwidth limit well before it runs out of processing time.
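A rough sketch of the idea with curl range requests, fetching two halves in parallel and joining them (the URL and byte offsets are illustrative):

curl -r 0-2147483647 -o part1 http://example.com/big-file.dat &
curl -r 2147483648-  -o part2 http://example.com/big-file.dat &
wait
cat part1 part2 > big-file.dat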

Bittorrent will not improve speed in a one-to-one setup, but it might ease resuming a failed (or interrupted) transmission, and might catch and fix transmission errors that FTP couldn't (though this is very unlikely).

rsync and scp are indeed more common alternatives, as they require very little setup to get started.

sylvainulg
+1  A: 

HTTP: has a 2 GB limit on some servers.
SSH+SFTP: a good choice, but you need SSH access on the remote server (not rare in general, but uncommon on shared hosting).
rsync: uncommon on shared hosting; trivial if you have control over the remote host.

My choice: a server-side script and a local script that chops the file into 10 MB parts, computes an MD5 for each, and sends them; on the remote end, the parts are joined back together once the MD5s check out.
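A very rough sketch of that approach with standard tools (file names are illustrative, and the actual upload step is left out):

split -b 10M big-file.dat part_          # chop into 10 MB pieces: part_aa, part_ab, ...
md5sum part_* > parts.md5                # checksums to verify each piece after upload
# ...upload part_* and parts.md5, then on the server:
md5sum -c parts.md5 && cat part_* > big-file.dat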

lms
A: 

SFTP should work. Note that SFTP is NOT just FTP run over SSH. It is a completely new protocol, much improved over FTP. It should be able to handle files over 4 GB, as it uses 64-bit integers for file sizes. And it is more secure and reliable to boot.
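With a reasonably recent OpenSSH sftp client, resuming is built in (host and file name are placeholders):

sftp user@remote-host
sftp> put big-file.dat      # initial upload
sftp> reput big-file.dat    # after an interruption, resume the upload where it stopped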

Paul Wicks
A: 

1. Install Freenet.
2. Set up a darknet.
3. Put the file in Freenet.
4. Send the key (the method is up to you: email, snail mail, web service, whatever).
5. Retrieve it on the client.

Some time later: voila.

The file will be strongly encrypted in transit and hidden among other traffic. People listening along the path(s) will have a hard time finding where the file starts and ends, or even that you sent a file at all.

Christopher Mahan
+8  A: 

And let's not forget that for a good ad-hoc hack job to move a single file around, there is always good old netcat:

nc -l -p 8000 >destination.dat

nc destination-machine 8000 <source.dat

:)

Jim T
You can even mix a little `gzip`/`gunzip` into the process and get on-the-wire compression.
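Roughly, reusing the same port and hosts as above:

nc -l -p 8000 | gunzip > destination.dat          # receiver: decompress as the data arrives
gzip -c source.dat | nc destination-machine 8000  # sender: compress on the fly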
Steve Schnepp
A: 

The Kermit protocol? It's what is used to update the software of space probes out there.

edomaur
A: 

The nice thing about HTTP is that most sysadmins let its traffic through, while they might block other protocols.

If you need to download a file from a server, HTTP is fine as long as you are not transferring sensitive data. Otherwise you can use HTTPS.

If you need to upload a file to a server, you can use WebDAV. Since it's an HTTP extension, it is usually not firewalled.
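For instance, a WebDAV upload is just an HTTP PUT, which curl can do directly (the URL and credentials are placeholders):

curl -T big-file.dat --user name:password https://example.com/dav/big-file.dat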

agateau
A: 

Why not distribute through e-mail, with your file as an attached payload? You can use strong encryption. The big con is that you'd need to integrate a mail client into your software to retrieve messages in a timely manner (IMAP or POP), decode them, and copy the attachment to the right place.

... but people are talking about SFTP, BitTorrent, and such, so apparently building complex clients for simple human tasks isn't a problem?

jpinto3912
Email servers often have maximum limits on attachment size, and sending large files through email puts undue load on the infrastructure.
Charles Duffy
+6  A: 

BitTorrent is very reliable for transferring files and just continues after any interruption.

Rob Kam
A: 

"wget" runs from the command line, supports large file HTTP downloads and HTTP resume. TCP seems to be reasonably optimal for bulk transfer.

Einstein
A: 

If you are keen on speed, FTP is really your only option. SFTP and FTPS are more secure, but incur a substantial overhead because of encryption.

Here's a bit of a discussion on whether FTP is dead.

Bruce Blackshaw
A: 

FTP is a little complex to set up but works great once it's all set up. Since I found Binfer I have stopped using FTP. It works over HTTP, resumes transfers, there are no servers or clients to set up, and it can send files of any size.

Luke
A: 

There are several FTP alternatives. FileShare Plus is a web-based FTP alternative that some companies are using for enhanced file sharing. It's free for up to 20 users. Here is the link: http://FileSharePlus.com.

Technical Framework