I have a "Data" directory, that I rsync to a remote NAS periodically via a shell script.

However, I'd like to make this more efficient by detecting whether anything has changed in "Data" before running rsync, so that I don't wake up the drives on the NAS unnecessarily.

I was thinking of modifying the shell script to get the latest modification time of the files in "Data" (using a recursive find) and write it to a file every time "Data" is rsynced.

Before every sync, the shell script can compare the current timestamp of "Data" with the timestamp recorded at the previous sync. If the current timestamp is newer, run rsync; otherwise do nothing.
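
Something along these lines is what I have in mind (a rough sketch only; it assumes GNU find for -printf, and ".last_sync" and the rsync destination are just placeholder names):

# remember the newest mtime seen at the last sync, and only rsync if it has grown
last=$(cat .last_sync 2>/dev/null || echo 0)
current=$(find Data -type f -printf '%T@\n' | cut -d. -f1 | sort -n | tail -1)
if [ "$current" -gt "$last" ]; then
    rsync -a Data/ nas:/backup/Data/
    echo "$current" > .last_sync
fi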

My question is: is there a more efficient way to figure out whether the "Data" directory has been modified since the last rsync? Note that Data has many, many layers of sub-directories.

tia, rouble

A: 

If I understand correctly, you just want to see if any files have been modified so you can figure out whether to proceed to the rsync portion of your script?

It's a pretty simple task to figure out when the data was last synced, especially if you do this nightly. As soon as you find one file with mtime greater than the time of the last sync, you know you have to proceed to the full rsync.

find has this functionality built in:

# find all files modified in the last 24 hours
find . -mtime -1
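
Wrapped into a nightly script, that check might look something like this (a sketch; it assumes GNU find for -quit, and the rsync destination is a placeholder):

# run nightly: only rsync if at least one file under Data changed in the last 24 hours
if [ -n "$(find Data -mtime -1 -print -quit)" ]; then
    rsync -a Data/ nas:/backup/Data/
fi
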
meagar
A: 

Rsync already does this. There is no on-demand solution that doesn't require checking the mtime and ctime properties of the inodes.

However, you could create a daemon that uses inotify to track changes as they occur, and fire rsync at intervals, or whenever you feel enough events have accumulated to justify calling rsync.
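
For example, a minimal sketch using inotifywait from the inotify-tools package (assuming it is installed; the rsync destination is a placeholder):

#!/bin/sh
# block until something changes under Data, then sync; repeat
while inotifywait -r -e modify,create,delete,move Data; do
    rsync -a --delete Data/ nas:/backup/Data/
done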

Matt Joiner
A: 

I would use the find command, but do it this way: when the rsync runs, touch a marker file, like "rsyncranflag". Then you can run

find Data -newer rsyncranflag

That will say definitively whether any files were changed since the last rsync (subject to the accuracy of mtime).
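
Put together, the whole check might look something like this (a sketch; it assumes GNU find for -quit, and the rsync destination is a placeholder):

#!/bin/sh
# create rsyncranflag once (e.g. with touch) before the first run
# sync only if something under Data is newer than the marker file
if [ -n "$(find Data -newer rsyncranflag -print -quit)" ]; then
    rsync -a Data/ nas:/backup/Data/ && touch rsyncranflag
fi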

dj_segfault