views: 448

answers: 8
Possible Duplicate:
Why is git better than Subversion?

I've already read a lot about version control systems (though not enough to get the full picture), and the obvious conclusion is that Git is simply the best. Or maybe Bazaar. Or Mercurial. But if that were so, nobody would still be using SVN, and yet they do. Why? I don't yet have an opinion of my own on which VCS is generally the best, for lack of experience with them. Could you share your thoughts?

+5  A: 

People are still using Subversion because it's designed around the first paradigm that version control systems had (centralized). Switching to a distributed (and change-based instead of version-based) model can take some time to get accustomed to (as you can see in Joel's experience), so many teams decide against it out of resistance to change.

Mercurial tooling is quite mature, comparable to SVN in my opinion.

Kenji Kina
Subversion's paradigm was not the first to come along. It was inherited from CVS, which implemented the client-server model. Before that there were the local shared-repository models of SCCS and RCS.
Juliano
That paradigm is still centralized.
Kenji Kina
+8  A: 

SVN is established and mature, with equally mature tooling.

sylvanaar
+2  A: 

While SVN is established/mature, I have to admit Kenji Kina is right: it's living in the past with an outdated version control model. I've only used SVN, but after reading/watching Linus and Joel talk about DVCSs, it sounds like a brilliant idea. I think Perforce does something similar.

This doesn't mean SVN is bad (it works rather well!), but it is easier to manage your code and commit more frequently with a DVCS. If you break something, it's easy to undo. If you branch, it's easy to merge. In general, DVCSs fixed a lot of the major headaches most people have with Subversion.

If you've never used a version control system before, are comfortable deploying one yourself, and work with code (rather than documents, pictures, or other things that don't benefit from textual diffing), do yourself a favor and look into a DVCS. Then you won't need to be re-educated later.

sheepsimulator
"it sounds like a brilliant idea"... I've seen so much marketing material for truly poor products that I never believe the hype anymore; I judge only after I've actually used something and understood its weak points. DVCSs are good, but they solve different problems from CVCSs and are not always as suitable.
gbjbaanb
@gbjbaanb - That can be true. I just have VCS-envy. :) lol
sheepsimulator
+2  A: 

Subversion has an advantage when a repository contains lots of binary data, which don't delta-compress well. A Subversion checkout grabs the head only, but git clones the entire history, which can weigh in at multiple gigabytes. Yes, this is nice for airplane mode, but fetching the initial clone can take hours.

Greg Bacon
If you don't want to fetch all the previous history of a git repository, you can do a shallow clone (with `git clone --depth=$depth`). You can't use this incomplete clone for further cloning, pushing/pulling, etc., but it's enough to let you make edits under local revision control and send those commits as patches for upstream inclusion.
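The shallow-clone workflow Vineet describes can be tried end to end against a throwaway local repository; the paths and author identity below are made up for the demo, and the `file://` URL matters because a plain local path would bypass `--depth`:

```shell
# Create a throwaway repository with two commits to clone from.
src=$(mktemp -d)
git init -q "$src"
cd "$src"
git config user.email demo@example.com
git config user.name "Demo"
echo one > file.txt
git add file.txt
git commit -qm "first"
echo two > file.txt
git commit -qam "second"

# Shallow-clone only the most recent commit over the file:// transport.
dst=$(mktemp -d)
git clone -q --depth=1 "file://$src" "$dst"
cd "$dst"
git rev-list --count HEAD   # prints 1: only one commit of history came down
```

A full `git clone` of the same repository would report 2 commits; the shallow clone keeps just the tip.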
Vineet
The inability to push or pull makes that feature nearly worthless to all the environments I've worked in. Nobody wants to send patches by hand, or deal with trying to collate them all.
Zed
+2  A: 

Aside from the other things people are mentioning (large quantities of binary data, maturity, stability, etc.), SVN is built on a classic hierarchical model that is fundamentally different from patch/change-based revisions.

Our company made the decision to stay with SVN because this model fits the way we handle our release cycle and branching. We see the direct progression of versions as a boon, not a bane. Updates are pushed to a centralized repository, and certain revisions considered "stable" are checked out to a live environment. At any time, it is instantly clear to everyone involved what the state of each environment is. (Yes, this is possible with Git too.) Even management who know nothing about revision control or software development can say: "We liked how you had it when it was at version 2547".

On the other hand, I should mention that I use darcs and git for projects that my friends, fellow FOSS contributors, and I work on together, as the distributed, patch-based model works for us. We can move ad hoc through the timeline of the project and cherry-pick all kinds of changes.

Really, the advantage of SVN for my company is its strong hierarchy and its accessibility to non-programmers who are already familiar with the concepts of "logging in" and "downloading".

sleepynate
+8  A: 

I'm currently maintaining a version control service for a U.S. research institution. We're not only supporting SVN in addition to Git and Mercurial, but also CVS.

SVN's "killer feature" among our users is narrow clones. You can check out just one subdirectory deep in a hierarchy, download only the files related to that directory, and still be able to make commits. Git very recently gained a similar, but not quite as useful, variation on this feature called sparse checkouts (see also http://stackoverflow.com/questions/2336580/sparse-checkout-in-git-1-7-0). This lets you filter your working tree, but it still forces you to download the entire history of the entire project, which can be prohibitive even when large binaries aren't involved. Mind you, disk is cheap, and if you absorb the hit of the initial clone in advance, subsequent pulls are quick enough; but that doesn't help people who went on a trip before they realized they needed to clone, and in any case even Git's sparse checkouts won't let you start your working tree five levels down, so the result looks a bit ugly.
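For the curious, Git's sparse checkout (the 1.7-era mechanism linked above) can be sketched as follows. The repository layout, paths, and author identity here are invented for the demo; note that, as described, the clone still downloads the full history even though the working tree is filtered:

```shell
# Build a small repository with two top-level directories.
src=$(mktemp -d)
git init -q "$src"
cd "$src"
git config user.email demo@example.com
git config user.name "Demo"
mkdir -p docs src
echo readme > docs/readme.txt
echo main > src/main.c
git add .
git commit -qm "initial"

# Clone it (full history comes down), then restrict the working tree to docs/.
work=$(mktemp -d)
git clone -q "file://$src" "$work"
cd "$work"
git config core.sparsecheckout true
echo "docs/" > .git/info/sparse-checkout
git read-tree -mu HEAD

# The working tree now contains docs/readme.txt but not src/main.c,
# while the full project history remains in .git.
```

Contrast this with an SVN narrow clone (`svn checkout URL/docs`), which never downloads the rest of the tree or its history in the first place.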

In addition, users find authz files easier to write than Git commit hooks, are more comfortable with the SVN syntax and methodology than any DVCS, and perhaps most importantly of all, already have many thousands of commits worth of history in SVN. Experiments in migrating large Subversion repositories to Git or Mercurial have provided mixed results, and these are scientists trying to get work done on their own projects, not donating their time to development of a DVCS.

CVS still has a following for a similar reason. Imagine, as a Git user, having sparse checkouts that also allow you to arbitrarily remap where files in the branch show up in your working tree, using a format that is versioned along with the repository and is distributed with every usual pull, that allows you to write definitions that can have groups that can include other groups, and that only pulls down the files necessary for filesystem placement on a clone. That's straightforward in CVS modules, and impossible in every DVCS. For all the sins of CVS (and believe me, we're quite aware of them, and go out of our way to discourage new CVS projects unless they absolutely can't live without modules), it's impossible to convince a group using that feature to migrate to another version control system.

DVCS software has brought some awesome innovations, but they're also missing things that some developers take for granted. Make sure you know in advance what your requirements are before choosing one.

Zed
SVN's "narrow clones" are called "sparse directories", or occasionally "shallow checkouts". It's an awesome feature.
gbjbaanb
+4  A: 

Speaking for the company I work for: the biggest reason for using SVN is being able to keep huge binary files in proprietary formats under version control. Specifically, libraries of thousands of CAD files. In this instance, it does make sense for the VCS to be file-based like SVN, rather than textual-information-based like Git.

Putting aside whether or not you can do shallow "checkouts" with Git, or how well it stores binary data, the fact is that it's designed to track lines of textual information floating around a tree of source code. However well it does that, the model is not suited to tracking libraries of binary data.

Honestly though, that's about the only solid reason I would recommend it :P

detly