As I mentioned in "What are the Git limits", Git is not made to manage big files (or big binary files for that matter).
Git would be needed only if you need to:
- know what has actually changed within a file. But at the directory level, the other answers are better (Unison or rsync; see the rsync sketch after this list)
- keep a close proximity (i.e. the "same referential") between your development data and those large resources. Having only one referential would help, but then you would need a fork of Git, like git-bigfiles, to manage them efficiently.
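For that directory-level case, here is a minimal sketch of what a one-way rsync mirror could look like (the paths and host are placeholders, not anything from the original question):

```bash
# Mirror a directory of large assets to another location.
# -a keeps permissions and timestamps, --delete removes files that
# no longer exist on the source side. Paths/host are placeholders.
rsync -av --delete /path/to/large-assets/ backup-host:/path/to/large-assets/
```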
Note: if you still want to use Git, you can try this approach:
Unfortunately, rsync isn't really perfect for our purposes either.
- First of all, it isn't really a version control system. If you want to store multiple revisions of the file, you have to make multiple copies, which is wasteful, or xdelta them, which is tedious (and potentially slow to reassemble, and makes it hard to prune intermediate versions; see the sketch after this list), or check them into Git, which will still melt down because your files are too big.
- Plus rsync really can't handle file renames properly - at all.
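To make that xdelta point concrete, here is a rough sketch of storing and reassembling a second revision as a binary delta (using xdelta3 as the concrete tool; all file names are placeholders):

```bash
# Encode revision 2 as a delta against revision 1 (-e = encode, -s = source).
xdelta3 -e -s bigfile.v1 bigfile.v2 bigfile.v1-to-v2.delta

# Rebuilding revision 2 later means re-applying the delta (-d = decode);
# with many revisions this becomes a chain of deltas, which is what makes
# reassembly slow and pruning intermediate versions hard.
xdelta3 -d -s bigfile.v1 bigfile.v1-to-v2.delta bigfile.v2.restored
```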
Okay, what about another idea: let's split the file into chunks, and check each of those blocks into Git separately. Then Git's delta compression won't have too much to chew on at a time, and we only have to send modified blocks...
This is based on gzip --rsyncable, with a POC available in this Git repo.
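As a rough illustration of the chunking idea only (the actual POC derives chunk boundaries from a rolling checksum, as gzip --rsyncable does, rather than from fixed offsets), a naive fixed-size split could look like this; the file names and chunk size are placeholders:

```bash
# Naive version: cut the big file into fixed-size pieces and commit the
# pieces instead of the original file.
mkdir -p chunks
split --bytes=1M bigfile.bin chunks/bigfile.part.
git add chunks
git commit -m "Store bigfile.bin as 1M chunks"

# To rebuild the original file, concatenate the pieces in order:
cat chunks/bigfile.part.* > bigfile.restored.bin
```

The weakness of fixed-size chunks is that inserting one byte near the start of the file shifts every later boundary, so Git sees every chunk as modified; deriving the boundaries from the content itself (the rolling-checksum idea behind gzip --rsyncable that the POC builds on) keeps most chunk boundaries stable across such edits.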