Big thumbs up for robocopy. I use it for doing the sort of things you mention.
For example I'm currently running 5 robocopy sessions on my server where I'm copying about 60GB of files between 3 remote servers, I'm connected to two via a CheckPoint VPN and the other is an Amazon S3 space mapped via JungleDisk.
I'm working with a colleague at the other end of the country. He'll log in to the same servers later tonight and run a similar set of robocopy batch files to download all the changes I'm currently uploading.
The 'killer app' feature is that robocopy retains file date/time stamps and, by default, will ONLY copy files that are different. So you can point it at a huge dir tree and only the changed files will be copied.
Here are some useful tips for doing this sort of thing...
/MIR
mirrors a dir tree, so it will delete files from the destination as well as add them
/R:10
tells robocopy to retry a failed copy 10 times before giving up. The default is 1,000,000 retries, so you'll almost certainly want to set this.
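A sketch of pairing /R with /W, which sets the seconds to wait between retries (the default is 30). The paths here are hypothetical:

```shell
:: Retry each failed file 10 times, waiting 5 seconds between attempts,
:: so a flaky network link doesn't stall the run for long.
robocopy d:\src \\server\share\dst /R:10 /W:5
```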
/LOG+:somefilename.log
will append the screen output to somefilename.log, creating it if necessary.
/XD dir1 dir2
excludes any dirs named dir1 or dir2 from the copy. Wildcards can be used.
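For example, a sketch of excluding build and scratch directories (the dir names are just illustrative):

```shell
:: Mirror the tree but skip any dir named node_modules or .git,
:: plus anything matching the wildcard temp*.
robocopy d:\project y:\project /MIR /XD node_modules .git temp*
```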
/FFT
assumes FAT file times, which are less precise than NTFS (a 2-second granularity in timestamps). I also find this one useful when copying between Linux file systems and NTFS.
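As a sketch of that last point, syncing to a Samba share backed by a Linux file system (share name hypothetical):

```shell
:: Without /FFT, sub-2-second timestamp differences can make robocopy
:: re-copy every file on each run against a non-NTFS destination.
robocopy d:\data \\nas\backup /MIR /FFT /R:5
```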
I typically use something like
robocopy d:\workdir y:\workdir /TEE /LOG+:d:\update.log /MIR /R:5
which will mirror (/MIR) d:\workdir to y:\workdir, append a log of what it does to d:\update.log (/LOG+:d:\update.log), write output to both the console and the log file (/TEE), and retry each failed file 5 times before moving on to the next one (/R:5).
It also works with UNC paths.
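For instance, something like this (server and share names hypothetical):

```shell
:: Mirror directly between two servers over UNC paths,
:: logging to a local file.
robocopy \\server1\share\data \\server2\share\data /MIR /R:5 /LOG+:d:\sync.log
```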
If you've got a large collection of files that need syncing over a number of PCs then robocopy is your friend.