tags:
views: 88
answers: 1

I am trying to transfer a large (~3 GB) file between two Unix machines.

I can use scp or rsync, but sometimes the transfer is corrupted. (I have to check manually.) I can split the file into pieces, transfer them, checksum each piece, and recombine them, but this is tedious.
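For reference, the manual workflow I mean looks roughly like this (the piece size, host, and paths are just examples):

    split -b 512M bigfile part.                 # split into 512 MB pieces
    sha256sum part.* > sums.txt                 # checksum every piece
    scp part.* sums.txt user@remote:/data/      # copy pieces plus checksums
    ssh user@remote 'cd /data && sha256sum -c sums.txt && cat part.* > bigfile'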

Is there a single command to correctly transfer a large file between two Unix machines? I want it to automatically checksum both copies, and keep redoing the transfer (or pieces thereof) until it gets all bytes across the wire correctly.

A: 

Sorry, but scp should not corrupt files. It runs over TCP/IP, which takes care of correct data transfer. Maybe you should check for bad RAM or other hardware problems on your servers/clients.
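If you want to verify anyway, you can compare checksums on both ends after the copy, and a retry loop around rsync gets close to the single command you asked for. A rough sketch, assuming GNU coreutils' sha256sum is available on both machines (host and paths are hypothetical):

    # copy, then compare SHA-256 checksums on both ends
    scp bigfile user@remote:/data/bigfile
    local_sum=$(sha256sum bigfile | awk '{print $1}')
    remote_sum=$(ssh user@remote "sha256sum /data/bigfile" | awk '{print $1}')
    [ "$local_sum" = "$remote_sum" ] && echo "OK" || echo "mismatch, retry"

    # or let rsync handle it: --checksum re-verifies existing files,
    # --partial keeps partial transfers so retries resume instead of restarting
    until rsync --checksum --partial bigfile user@remote:/data/; do sleep 5; done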

Patrick Cornelissen