The interaction between Subversion's binary delta algorithms, compression in tracked files and the server's own internal use of compression can be complex.
Here's an example.
I took a copy of an x86 emacs binary (about 10MB, 4MB compressed with gzip) as my "binary file". I wrote a little program which "edits" a binary file by overwriting 4 consecutive bytes at a random position with random data.
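In case it's useful, here is a minimal Python sketch of such an edit program (the file handling details are my own choices; the only essential part is the 4-byte random overwrite):

```
import os
import random
import sys

def random_edit(path, n_bytes=4):
    """Overwrite n_bytes consecutive bytes at a random offset with random data."""
    size = os.path.getsize(path)
    offset = random.randrange(size - n_bytes)
    with open(path, "r+b") as f:   # open for in-place binary update
        f.seek(offset)
        f.write(os.urandom(n_bytes))

if __name__ == "__main__":
    random_edit(sys.argv[1])
```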
I then wrote three scripts, each simulating 100 commits in one of the following fashions:
Scenario 1: The file is compressed with gzip in the repository
For each repetition: we decompress the file, perform our edit, recompress it, and check it in.
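Roughly, the per-commit loop looked like this (a sketch only, assuming the current directory is a working copy containing the tracked file as emacs.gz; file names, the module name, and commit messages are illustrative):

```
import gzip
import shutil
import subprocess

from random_edit import random_edit   # the helper sketched above (module name is mine)

for i in range(100):
    # Decompress the tracked file to a scratch copy.
    with gzip.open("emacs.gz", "rb") as src, open("emacs", "wb") as dst:
        shutil.copyfileobj(src, dst)

    random_edit("emacs")               # overwrite 4 random bytes

    # Recompress over the tracked file and commit.
    with open("emacs", "rb") as src, gzip.open("emacs.gz", "wb") as dst:
        shutil.copyfileobj(src, dst)

    subprocess.run(["svn", "commit", "-m", f"edit {i}"], check=True)
```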
Final repository size: 9.6 MB
(This was better than I expected until I realized that because of the way gzip works, the bytes before the random edit (half the file, on average) will be identical to those of the previous version, even after compression.)
Scenario 2: The file is not compressed in the repository
For each repetition: we simply perform our edit and then check in the changes.
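Which amounts to something like this (same assumptions as above, with the tracked file stored uncompressed as emacs):

```
import subprocess

from random_edit import random_edit   # the helper sketched above

for i in range(100):
    random_edit("emacs")               # overwrite 4 random bytes in the tracked file
    subprocess.run(["svn", "commit", "-m", f"edit {i}"], check=True)
```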
Final repository size: 5.1 MB
Scenario 3: The file is imported from scratch every time
For each repetition: we copy the binary (not using svn copy) to a new file, edit this copy, add it and commit the changes. This is equivalent to an import since there is no historical connection to the previous copy of the file.
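A sketch of that loop (here emacs-pristine is my placeholder name for an untracked copy of the original binary kept outside version control):

```
import shutil
import subprocess

from random_edit import random_edit   # the helper sketched above

for i in range(100):
    name = f"emacs-{i}"
    shutil.copyfile("emacs-pristine", name)   # a plain filesystem copy, not "svn copy"
    random_edit(name)
    subprocess.run(["svn", "add", name], check=True)
    subprocess.run(["svn", "commit", "-m", f"add copy {i}"], check=True)
```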
Final repository size: 403 MB
Just to give you a feel for Subversion's server-side compression, I repeated the third scenario, only this time compressing the binary files on the client side before adding and committing them each time.
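That is, the same loop as above, except each copy is gzipped before being handed to Subversion (again just a sketch under the same assumptions):

```
import gzip
import shutil
import subprocess

from random_edit import random_edit   # the helper sketched above

for i in range(100):
    plain = f"emacs-{i}"
    shutil.copyfile("emacs-pristine", plain)
    random_edit(plain)

    # Compress on the client side before adding the file.
    with open(plain, "rb") as src, gzip.open(plain + ".gz", "wb") as dst:
        shutil.copyfileobj(src, dst)

    subprocess.run(["svn", "add", plain + ".gz"], check=True)
    subprocess.run(["svn", "commit", "-m", f"add compressed copy {i}"], check=True)
```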
Final repository size: 392 MB
So, whatever Subversion is doing, it appears to be about as good as gzip.
Your questions make it sound like you're assuming that compression on the client side will help you. It may very well not do so.
In my experience, client-side compression is only worth doing when:
- The file is large.
- The compression you are using is considerably tighter than what Subversion manages (e.g. if you're using bzip2 or lzma).
- The file is rarely edited.