I have a repository I'm migrating to a new Subversion server using svnadmin dump and load. I'm doing it in two passes, as outlined in the steps below:
- repoX is at revision 100
- I run svnadmin dump repoX > repoX.dump on the old server
- I run svnadmin create repoX on the new server
- I run svnadmin load repoX < repoX.dump on the new server
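To be concrete, the first pass is the following commands (the /srv/svn paths are placeholders; substitute wherever your repositories actually live):

```shell
# On the old server: full dump of everything up to the current
# revision (r100 at the time of the dump).
svnadmin dump /srv/svn/repoX > repoX.dump

# On the new server: create an empty repository and load the dump
# into it.
svnadmin create /srv/svn/repoX
svnadmin load /srv/svn/repoX < repoX.dump

# Sanity check on the new server.
svnadmin verify /srv/svn/repoX
```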
This all works so far, and svnadmin verify reports the new repository is OK. Next I try to bring the new repository up to date (this is prior to the final cutover; I'm still testing at this stage).
- I run svnadmin dump repoX -r 101:HEAD --incremental > repoX.dump on the old server
- I run svnadmin load repoX < repoX.dump on the new server once again
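That second pass, spelled out (again with placeholder paths):

```shell
# On the old server: dump only the revisions created since the first
# pass. --incremental records r101 as a delta against r100 rather
# than a full-tree snapshot, so it should replay cleanly on top of
# the earlier load.
svnadmin dump /srv/svn/repoX -r 101:HEAD --incremental > repoX-inc.dump

# On the new server: replay the new revisions.
svnadmin load /srv/svn/repoX < repoX-inc.dump
```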
This fails with an error saying that some path in repoX doesn't exist. So I tried the incremental dump again using:
- svnadmin dump repoX -r 100:HEAD --incremental > repoX.dump
This time it works and the repository verifies OK.
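So the variant that happened to work for repoX was:

```shell
# Start the incremental dump at r100, the last revision already
# present on the new server, instead of r101. Note this replays r100
# itself as part of the load, which I would have expected to collide
# with content that is already there -- but for repoX it loaded and
# verified without complaint.
svnadmin dump /srv/svn/repoX -r 100:HEAD --incremental > repoX-inc.dump
svnadmin load /srv/svn/repoX < repoX-inc.dump
```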
However... there are other repositories, repoY for example, where the opposite happens: when I use the second approach (dump up to revision 100, then an incremental dump starting at revision 100), I get a different error saying a directory already exists! Some repositories work one way, some the other, and no single approach works for all of them.
So what I want to know is the correct mechanism for dumping up to revision 100 (say), then dumping the remaining revisions up to HEAD in a second sweep. I've spent the morning reading, reading and reading, but I can't find a single example of what I'm trying to do, even though I know it can be done.
I can't cut over with a single dump per repository because we're talking about 150 GB of data spread across 50 repositories, which physically can't be dumped, transferred and loaded in one night. This two-pass mechanism is meant to let us move 95% of the data before the final cutover... but it looks like my theory might be flawed.
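For reference, my plan for the bulk pre-cutover pass is roughly this sketch (repository names are placeholders; the per-repo cutoff revision is recorded so the final incremental pass knows where to start):

```shell
#!/bin/sh
# Hypothetical sketch of the bulk pass: dump each repository up to
# its current youngest revision, and record that revision so the
# final cutover pass can dump from cutoff+1 to HEAD.
SRC=/srv/svn
for repo in repoX repoY; do
    # svnlook youngest prints the latest revision number of a repo.
    rev=$(svnlook youngest "$SRC/$repo")
    svnadmin dump "$SRC/$repo" -r "0:$rev" > "$repo.dump"
    echo "$repo $rev" >> cutoff-revisions.txt
done
```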
If you want any more info just ask.
Thanks