I've heard in a few places that one of the main ways distributed version control systems shine is much better merging than traditional tools like SVN. Is this actually due to inherent differences in how the two kinds of system work, or do specific DVCS implementations like Git/Mercurial just have cleverer merging algorithms than SVN?
SVN tracks files while Git tracks content. It is clever enough to track a block of code that was refactored from one class/file to another. The two use completely different approaches to tracking your source.
I still use SVN heavily, but I am very pleased with the few times I've used Git.
A nice read if you have the time:
http://plasmasturm.org/log/487/
It is a difference caused by the way revisions are stored. SVN logically stores the file state at different points in time (though using deltas), while Git and most other DVCSs store changesets.
Historically, Subversion has only been able to perform a straight two-way merge because it didn't store any merge information. This involves taking a set of changes and applying them to a tree. Even with merge information, this is still the most commonly used merge strategy.
Git uses a 3-way merge algorithm by default, which involves finding a common ancestor to the heads being merged and making use of the knowledge that exists on both sides of the merge. This allows Git to be more intelligent in avoiding conflicts.
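If you want to see a three-way merge in isolation, Git exposes it at the file level through git merge-file; a minimal sketch, where the three file names are hypothetical copies of one file and base.c is the common-ancestor version:

git merge-file ours.c base.c theirs.c    # folds the base.c -> theirs.c changes into ours.c, in place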
Git also has some sophisticated rename-finding code, which helps as well. It doesn't store changesets or any tracking information; it just stores the state of the files at each commit and uses heuristics to locate renames and code movements as required (the on-disk storage is more complicated than this, but the interface it presents to the logic layer exposes no tracking).
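You can see those heuristics at work even though nothing about a rename was recorded at commit time; for example (the path here is hypothetical):

git log --follow -- src/new-name.c    # follows the file's history across renames
git diff -M --stat HEAD~1 HEAD        # -M asks the diff machinery to detect renames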
I just read an article on Joel's blog (sadly, his last one). This one is about Mercurial, but it actually talks about the advantages of distributed VC systems such as Git.
With distributed version control, the distributed part is actually not the most interesting part. The interesting part is that these systems think in terms of changes, not in terms of versions.
Read the article here.
The claim that merging is better in a DVCS than in Subversion was largely based on how branching and merging worked in Subversion a while ago. Subversion prior to 1.5.0 didn't store any information about when branches were merged, so when you wanted to merge you had to specify which range of revisions had to be merged.
So why did Subversion merges suck?
Ponder this example:
        1   2   4     6     8
trunk   o-->o-->o---->o---->o
             \
              \    3     5     7
b1             +--->o---->o---->o
When we want to merge b1's changes into the trunk we'd issue the following command, while standing in a folder that has trunk checked out:
svn merge -r 3:7 {link to branch b1}
… which will attempt to merge the changes from b1 into your local working directory. You then commit the changes after you have resolved any conflicts and tested the result (the commit command is sketched after the diagram). When you commit, the revision tree will look like this:
        1   2   4     6     8   9
trunk   o-->o-->o---->o---->o-->o      "the merge commit is at r9"
             \
              \    3     5     7
b1             +--->o---->o---->o
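The commit itself is just an ordinary Subversion commit from that same working copy; something like this, where the message is only an example:

svn commit -m "Merged b1 changes r3:7 into trunk"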
However, this way of specifying ranges of revisions quickly gets out of hand as the version tree grows, because Subversion didn't have any metadata about when and which revisions were merged together. Ponder what happens later:
           12        14
trunk  …-->o-------->o
                                  "Okay, so when did we merge last time?"
              13        15
b1     …----->o-------->o
This is largely an issue caused by Subversion's repository design. To create a branch you create a new virtual directory in the repository that houses a copy of the trunk, but nothing records when and what got merged back in. That leads to nasty merge conflicts at times. What was even worse is that Subversion used two-way merging by default, which has some crippling limitations for automatic merging when two branch heads are not compared with their common ancestor.
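That virtual directory is literally created by copying; a typical branch creation looks something like this (the repository URL is hypothetical):

svn copy http://svn.example.com/repo/trunk \
         http://svn.example.com/repo/branches/b1 \
         -m "Create branch b1"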
To mitigate this, Subversion now stores metadata for branching and merging. That should solve all problems, right?
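For what it's worth, with that merge tracking you can omit the revision range and let Subversion consult the recorded merge information; a sketch, again with a hypothetical URL, run from a trunk working copy:

svn merge http://svn.example.com/repo/branches/b1
svn commit -m "Merge b1 into trunk"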
And oh, by the way, Subversion still sucks…
On a centralized system like Subversion, virtual directories suck. Why? Because everyone has access to view them… even the garbage experimental ones. Branching is good if you want to experiment, but you don't want to see everyone and their aunt's experimentation. This is serious cognitive noise. The more branches you add, the more crap you'll get to see.
The more public branches you have in a repository, the harder it is to keep track of all of them. So the question you'll have is whether a branch is still in development or really dead, and that is hard to tell in any centralized version control system.
Most of the time, from what I've seen, an organization defaults to using one big branch anyway. Which is a shame, because it then becomes difficult to keep track of testing and release versions, and you lose whatever else good comes from branching.
So why is DVCS, such as Git and Mercurial, better than Subversion at branching and merging?
There is a very simple reason why: branching is a first-class concept. There are no virtual directories by design, and branches are hard objects in a DVCS, which they need to be in order for synchronization of repositories (i.e. push and pull) to work simply.
The first thing you do when you work with a DVCS is to clone a repository (git clone and hg clone). Cloning is conceptually the same thing as creating a branch in version control. Some call this forking, but it's just the same thing. In fact, every user runs their own repository, which means you have per-user branching going on.
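For example (the URLs are hypothetical):

git clone https://example.com/project.git
hg clone https://example.com/project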
The version structure is not a tree but a graph, more specifically a directed acyclic graph (DAG, meaning a graph that doesn't have any cycles). You don't really need to dwell on the specifics of a DAG beyond the fact that each commit has one or more parent references (what the commit was based on). Because of this, the following graphs show the arrows between revisions pointing backwards.
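If you want to see that DAG in an existing Git repository, a quick way is:

git log --graph --oneline --all    # draws the commit graph, one line per commit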
A very simple example of merging would be this: imagine a central repository called origin and a user, Alice, cloning the repository to her machine.
          a…   b…   c…
origin    o<---o<---o
                    ^ master

              |
              | clone
              v

          a…   b…   c…
alice     o<---o<---o
                    ^ master
                    ^ origin/master
What happens during a clone is that every revision is copied to Alice exactly as it is (which is validated by the uniquely identifiable hash IDs), and the clone records where origin's branches are.
Alice then works on her repo, committing to her own repository, and decides to push her changes:
          a…   b…   c…
origin    o<---o<---o
                    ^ master

          "what'll happen after a push?"

          a…   b…   c…   d…   e…
alice     o<---o<---o<---o<---o
                              ^ master
                    ^ origin/master
The solution is rather simple: the only thing the origin repository needs to do is take in all the new revisions and move its branch to the newest revision (which Git calls a "fast-forward"):
          a…   b…   c…   d…   e…
origin    o<---o<---o<---o<---o
                              ^ master

          a…   b…   c…   d…   e…
alice     o<---o<---o<---o<---o
                              ^ master
                              ^ origin/master
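On Alice's machine the whole exchange is just ordinary committing and pushing; a minimal sketch (the commit messages are only placeholders, and origin is the default name a clone gives its source repository):

git commit -am "d"
git commit -am "e"
git push origin master    # origin simply fast-forwards its master to e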
The use case I illustrated above doesn't even need to merge anything. So the issue really isn't with merging algorithms, since the three-way merge algorithm is pretty much the same between all version control systems. The issue is more about structure than anything else.
So how about you show me an example that has a real merge?
Admittedly the above example is a very simple use case, so let's do a much more twisted one, albeit a more common one. Remember that origin started out with three revisions? Well, the guy who did them, let's call him Bob, has been working on his own and made a commit in his own repository:
          a…   b…   c…   f…
bob       o<---o<---o<---o
                         ^ master
                    ^ origin/master

          "can Bob push his changes?"

          a…   b…   c…   d…   e…
origin    o<---o<---o<---o<---o
                              ^ master
Now Bob can't push his changes directly to the origin repository. The system detects this by checking whether Bob's revisions directly descend from origin's, which in this case they don't. Any attempt to push will result in the system saying something akin to "Uh… I'm afraid I can't let you do that, Bob."
So Bob has to pull in the changes first and then merge. This is an automated two-step process in both git and hg. First Bob has to fetch the new revisions, which copies them as they are from the origin repository (the command is sketched after the diagram). We can now see that the graph diverges:
                         v master
          a…   b…   c…   f…
bob       o<---o<---o<---o
                    ^
                    |    d…   e…
                    +----o<---o
                              ^ origin/master

          a…   b…   c…   d…   e…
origin    o<---o<---o<---o<---o
                              ^ master
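The fetch step on Bob's machine is a single command in either system; a sketch:

git fetch origin    # copies d… and e… into Bob's repository and moves origin/master
hg pull             # Mercurial's equivalent: pulls the new changesets without merging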
The second step of the pull process is to merge the diverging tips and make a commit of the result:
                                   v master
          a…   b…   c…   f…        1…
bob       o<---o<---o<---o<--------o
                    ^               |
                    |    d…   e…    |
                    +----o<---o<---+
                              ^ origin/master
Hopefully the merge won't run into conflicts, but if you anticipate them it's good to at least do this pull process manually (with git's fetch and merge, or hg's pull and merge; a sketch of the commands follows the next diagram). What then needs to be done is to push those changes to origin again, which will result in a fast-forward merge since the merge commit is a direct descendant of the latest commit in the origin repository:
                                   v origin/master
                                   v master
          a…   b…   c…   f…        1…
bob       o<---o<---o<---o<--------o
                    ^               |
                    |    d…   e…    |
                    +----o<---o<---+

                                   v master
          a…   b…   c…   f…        1…
origin    o<---o<---o<---o<--------o
                    ^               |
                    |    d…   e…    |
                    +----o<---o<---+
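Put together, the merge-and-push part of Bob's session might look like this (a sketch; conflict resolution elided):

git merge origin/master           # creates the merge commit 1…
git push origin master            # origin fast-forwards its master to 1…

hg merge                          # the hg equivalent
hg commit -m "Merge with origin"
hg push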
There is another option for merging in git and hg, called rebase, which will move Bob's changes to after the newest changes. Since I don't want this answer to be any more verbose, I'll let you read the git or Mercurial docs about that instead.
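A minimal sketch of that alternative in git (Mercurial offers the same via its rebase extension):

git fetch origin
git rebase origin/master    # replays Bob's f… on top of e… instead of creating a merge commit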
As an exercise for the reader, try drawing out how it would work with another user involved. It is done similarly to the example above with Bob. Merging between repositories is easier than you'd think, because all the revisions/commits are uniquely identifiable.
There is also the issue of sending patches between developers; that was a huge problem in Subversion, and it is mitigated in git and hg by uniquely identifiable revisions. Once someone has merged their changes (i.e. made a merge commit) and sent it out for everyone else in the team to consume, either by pushing to a central repository or by sending patches, they don't have to worry about the merge, because it already happened. Martin Fowler calls this way of working promiscuous integration.
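In git, the patch-based variant of that workflow is typically done with format-patch and am; a sketch (the patch file name shown is hypothetical, it is whatever format-patch generates):

git format-patch origin/master    # one patch file per commit not yet in origin/master
git am 0001-description.patch     # the recipient applies it, keeping author and message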
Because the structure is different from Subversion's, employing a DAG instead, branching and merging can be done more easily, not only for the system but for the user as well.