views: 178

answers: 4
I'm trying to make a case against automated checkins to version control. My group at work has written some system build tools around CFEngine, and now they think these tools should automatically do checkins of things like SSH host keys.

Now, as a programmer, my initial gut reaction is that nothing should be calling "svn up" and "svn ci" aside from a human. In a recent case, the .rNNNN files that Subversion left behind after conflicted merges of a bunch of files broke the tools, which is what started this discussion.

Now, the guy designing the tools has basically admitted he's using SVN in order to sync files around, and that he could replace all of this with an NFS mount. He even said he would wrap "svn diff" into "make diff" because it seemed better than all of us knowing how SVN works.
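
To make it concrete, the wrapper he's proposing amounts to almost nothing; a hypothetical version of that "make diff" target (the variable name and default path here are my own guesses, not his actual code) would look something like:

    # Hypothetical sketch of the proposed wrapper, not the real tool.
    # WC is the Subversion working copy the build tools operate on.
    # (In a real Makefile the recipe lines must be indented with a tab.)
    WC ?= .

    .PHONY: diff update
    diff:
        svn diff $(WC)

    update:
        svn update $(WC)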

So... I'm asking the crowd here to help me make a good argument for NOT having Makefiles, shell scripts, etc., wrap Subversion commands, when Subversion is basically being used to synchronize files across different machines.

Here's my list, so far:

  1. We aren't really versioning this data, so it shouldn't go in svn.
  2. We've said it could be replaced by an NFS mount, so why don't we just do that?
  3. Homegrown tools are now wrapping SVN, and software always has bugs, so when those bugs hit, they will leave broken or half-committed revisions in our history.

... please discuss/help me make this case, or tell me why you disagree!

+5  A: 

SVN isn't a bad tool for synchronising files across machines! If I want a bunch of machines to have the exact same version of a file, then keeping them in Subversion and being able to check them out is a godsend. Yes, you could use tools such as rsync or NFS mounts to keep them up to date, but Subversion also stores every revision and lets you roll back or forward whenever you want.

One thing I will say, though, is that having machines automatically update from trunk is probably a bad idea when those files could break your system; they should update from a tag. That way, you can check things in to maintain revision history, TEST them, and then apply a tag that syncs the files to the other machines when they update.
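
A rough sketch of what I mean, assuming a conventional trunk/tags layout (the repository URL, revision number and paths below are placeholders, not anything from the question):

    # Promote a tested trunk revision to a production tag.
    REPO=https://svn.example.com/repos/sysconfig
    svn copy -r 1234 "$REPO/trunk" "$REPO/tags/production-20090701" \
        -m "Tag tested configuration for production sync"

    # Machines sync from the tag, never from trunk.
    svn checkout "$REPO/tags/production-20090701" /etc/sysconfig-sync
    # or, if a previous tag is already checked out:
    svn switch "$REPO/tags/production-20090701" /etc/sysconfig-sync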

I understand your concerns about having these tools auto-commit; you perhaps feel there should be some sort of human validation required. For me, though, removing human interaction removes human error from the process, which is exactly what I want from this type of system.

The human aspect should come in when you are confirming everything is working, before setting a production tag on the SVN tree.

In summary, your process is fine; blindly allowing an automated process to push files into an environment where they could break things is not.

Neil Trodden
The problem I'm having is that the tools we have end up checking in only half the files, because of conflicts from an update. Then the state of the world is broken, like you say.
Martin
Well, that's fine - these things happen. I think you are on the right lines here, but perhaps need to take a different approach. Think of it like this: there's nothing wrong with committing; it's checking out into a production environment without any testing that is the problem.
Neil Trodden
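
One way to keep that particular failure out of the repository is to make the wrapper refuse to commit after a conflicted update; a minimal sketch (the working-copy path and commit message are assumptions, not the actual tool):

    #!/bin/sh
    # Sketch: abort instead of checking in half the files when the
    # preceding update produced conflicts.
    WC=/var/lib/config-sync            # working-copy path is an assumption
    set -e
    svn update "$WC"
    if svn status "$WC" | grep -q '^C'; then
        echo "Conflicts in $WC; refusing to auto-commit." >&2
        exit 1
    fi
    svn commit "$WC" -m "Automated check-in of generated host data"
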
+1  A: 

It's another example of the old shoe vs. the glass bottle debate. In this instance the NFS mount may be the way to go; our nightly build commits versioning changes, and that's it.

Your SVN repository is what you use to help version and build your code. If what you're doing jeopardises this in any way, THEN DON'T DO IT.

If SVN is absolutely, positively the best way to do this, then create a separate repository, use that, and leave the critical repository alone.
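
Setting that up costs next to nothing; a sketch, assuming file-based access on the machine hosting the repository (all paths here are placeholders):

    # Create a throwaway repository just for the synced files.
    svnadmin create /srv/svn/host-sync
    svn import /etc/generated-keys \
        file:///srv/svn/host-sync/trunk \
        -m "Initial import of machine-generated files"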

Binary Worrier
+2  A: 

Actually, SVN is better than NFS here. At the very least it provides an atomically consistent global view (i.e. you won't sync a half-committed view of the files). I would argue against automated commits for development work, because they leave no room for a peer-review process, but for administration jobs SVN is quite useful. My 2c.

Remus Rusanu
A: 

Only humans should commit, but I see no reason of forbidding automated checkouts and updates. The only thing is to ensure that humans commit working, tested code to the place from which the automated updates are made.
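
For example, the automated side can be as dumb as a read-only update job; a sketch (the path, script name and schedule are assumptions):

    #!/bin/sh
    # Read-only automation: machines only ever check out and update,
    # they never commit.
    svn update -q /etc/sysconfig-sync

    # Example crontab entry (every 15 minutes):
    # */15 * * * * root /usr/local/sbin/sync-config.sh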

NFS is kinda cool, but if the NFS server breaks, you are in trouble. You can try something like GlusterFS to keep multiple copies of the data (but don't try to ls a GlusterFS directory, as it's O(number of files)).

Neil Trodden posted an excellent remark on tags in this thread; it'd solve your problem, I suppose.

Reef
I believe a word is missing in the first sentence. IMHO, it should be "but I see no reason of FORBIDDING automated checkouts and updates".
bortzmeyer
Thanks for spotting that, fixed.
Reef