The data set is 97,984 files in 6,766 folders, totaling 2.57 GB. Many of them are binary files.
To me this doesn't sound like much. The daily change rate is in the hundreds of KB across maybe 50 files. But I'm worried that Subversion will become extremely slow.
It was never fast to begin with, and the last time I used it (around v1.2) the recommendation was to split the data into multiple repositories. I'd rather not do that.
Is there a way to tell Subversion, or any other free open-source version control system, to trust the file modification time/file size to detect changes instead of comparing the contents of every file? With that, and with the data on a fast modern SSD, it should run quickly, say, less than 6 seconds for a complete commit (that's 3x longer than getting the summary from the Windows Explorer properties dialog).
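To make the idea concrete, here is a minimal Python sketch of the stat-only change detection I have in mind: compare each file's size and mtime against a cached manifest and only report files whose stat data differs, never reading file contents. The manifest file name and the functions are just placeholders for illustration, not anything Subversion provides.

```python
import json
import os

MANIFEST = "manifest.json"  # hypothetical cache of the last scan

def scan(root):
    """Walk the tree and record (size, mtime) per file; no content reads."""
    state = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            state[os.path.relpath(path, root)] = (st.st_size, st.st_mtime)
    return state

def changed_files(root):
    """Return paths whose size or mtime differs from the cached manifest."""
    try:
        with open(MANIFEST) as f:
            old = {k: tuple(v) for k, v in json.load(f).items()}
    except FileNotFoundError:
        old = {}
    new = scan(root)
    changed = [p for p, sig in new.items() if old.get(p) != sig]
    with open(MANIFEST, "w") as f:
        json.dump(new, f)
    return changed

if __name__ == "__main__":
    print(changed_files("."))
```

Something along these lines only has to stat ~98,000 files, which on an SSD should finish in a few seconds; that's the behavior I'd like the version control tool to use when deciding what to commit.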