views:

192

answers:

3

I have a CVS repository, mostly of Java code. Each package sits in its own top-level directory, like so, with the source laid out in typical Java fashion.

$CVSROOT/my.domain.module1/src/my/domain/module1
$CVSROOT/my.domain.module2/src/my/domain/module2
$CVSROOT/my.domain.share1/src/my/domain/share1

This means we can write build scripts that can easily pull any combination of packages out of the repository in order to build a particular shippable piece of software.

So if I check out my.domain.module1, the build script in that module will then pull in my.domain.share1 as well. This really promotes code reuse.

The approach has strengths and weaknesses - I'm not really interested in those today - but given this type of approach, is it possible/sensible to replicate it in Mercurial or Git?

From what I can tell you'd need to either define a whole repository per package, or check out and commit the whole repository each time!

A: 

Yes - I see exactly this problem - I want my build system to do the configuration management - I just want my version control to version my files. If you move too much of the configuration stuff into the version control system, it makes it nigh on impossible to move later - whatever happened to separation of concerns when it comes to building a project!

wibble
Thanks for the solidarity - but that's not really an answer! Anybody from the DVCS camps able to provide one?
+2  A: 

Regarding DVCS in general, a repo per component is the right size. Since each repo carries the full history of its component, a single repo holding every component would not scale well.

Git uses submodules; Mercurial uses subrepos.

The idea is to define a super-project (a repo in its own right) which will:

  • have its own files
  • have some of its own sub-directories be the root directories of submodule components, each pinned to a given reference (SHA1, tag, label)

If you make a modification in one of those sub-components from within the main project, you must first commit in that subrepo, and then go up and commit the main project itself (it won't contain all the data, only its own data plus pointers to the new submodule references you just committed).
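The commit order described above can be sketched with git submodules using throwaway local repos (repo names "inner"/"outer" are hypothetical; `protocol.file.allow=always` is only needed so newer git versions permit local-path submodule clones):

```shell
set -e
work=$(mktemp -d)
cd "$work"
# a standalone repo that will become the sub-component
git init -q inner
( cd inner \
  && git config user.email demo@example.com && git config user.name demo \
  && echo 1 > f.txt && git add f.txt && git commit -qm "inner: initial" )
# the super-project, which pulls inner in as a submodule
git init -q outer
cd outer
git config user.email demo@example.com && git config user.name demo
git commit -qm "outer: initial" --allow-empty
git -c protocol.file.allow=always submodule add -q ../inner sub
git commit -qm "outer: add sub as a submodule"
# edit inside the sub-component, then commit the subrepo FIRST...
echo 2 > sub/f.txt
( cd sub \
  && git config user.email demo@example.com && git config user.name demo \
  && git commit -qam "inner: change" )
# ...then go up and commit the super-project, which stores only a
# pointer to the new inner commit, not the inner data itself
git add sub
git commit -qm "outer: advance submodule pointer"
git submodule status    # lists the inner SHA1 the outer repo now pins
```

Forgetting the inner commit leaves the super-project pointing at the old revision, which is exactly the "too easy to make mistakes" trap mentioned below.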

VonC
Thanks - I looked at the Mercurial subrepos docs - I was worried that they are not fully baked. Git submodules look overly complex - too easy to make mistakes. It's not clear from the docs whether I can have the same subrepo in different positions relative to the root. E.g., to take the example from above, have share1 as a subrepo for module1 AND module2. With CVS it's easy. I guess I'm just going to have to experiment.
For example, if I check out module1 and then, through my build file, check out share1 in a subdirectory created for the purpose (say module1/dependencies), then it's easy for me to tag module1 and all its dependencies in one go.
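For what it's worth, in Mercurial each parent repo declares its subrepos in a `.hgsub` file at its root, one `path = source` line per subrepo, so each parent independently chooses where a shared subrepo lands. A hypothetical sketch using the names above (URLs are made up):

```
# .hgsub in my.domain.module1
dependencies/share1 = http://hg.example.com/my.domain.share1

# .hgsub in my.domain.module2 - same subrepo, its own choice of path
libs/share1 = http://hg.example.com/my.domain.share1
```

So the same subrepo in different positions (or the same position) under different parents is possible, much like the CVS arrangement.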
+1  A: 

It sounds like you have a great setup for using Ivy. It lets your build system handle the pull-down and compile-against parts of your dependencies, while source control just tracks point-in-time for modules.

Then in your Ivy dependency files you have a clear record of which version of each component the others depended on, and you can revert/advance easily.

You could also use Mercurial subrepos, but I prefer using a good dependency manager like Ivy.
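To make that concrete, a minimal ivy.xml for the module1 example might look like this (the organisation, module names and revision are illustrative, not taken from the thread):

```xml
<!-- ivy.xml for my.domain.module1 - names and revisions are illustrative -->
<ivy-module version="2.0">
    <info organisation="my.domain" module="module1"/>
    <dependencies>
        <dependency org="my.domain" name="share1" rev="1.2"/>
    </dependencies>
</ivy-module>
```

Reverting or advancing a dependency is then a one-line change to `rev`, versioned in module1's own history.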

Ry4an
The way you describe Ivy is the approach we take - but with our own Ant build system (it predates Ivy by the best part of a decade). The problem I've got is that if we want to continue with this approach but move from CVS to a newer source control system, they don't seem to match well. All the docs for the new source control systems talk about the new stuff they have versus CVS, then omit to mention what they have thrown away...
I don't see it as anything thrown away. You can still have a centralized workflow if that's what fits your build tools best. We link our Ivy dependencies to builds that come out of our continuous integration server (Hudson), which certainly is centralized.
Ry4an
So Ivy manages dependencies at the level of artifacts, i.e. jars? I want to manage them at the level of source code.
So long as you know what revision is in the artifact, why does it matter? Presumably you've got your reasons, but forcing modules to interop at the interface and link level really helps with the decoupling. We have artifacts built for every commit by our CI system, so operating at that level doesn't add any lag, it just enforces good separation.
Ry4an
The problem with building shippable applications from artifacts is that you can end up with overly large jar files - using one class from a package results in the whole jar file being included - unless you have an extra step of unpacking everything and working out the *real* dependencies. It's kinda easier to let javac do it at compile time.
I could see that, but we use ProGuard as a final finishing step, and it takes the huge collection of jars used for building and merges them down into the minimal jar required for shipping. Works wonderfully. It even (optionally!) removes methods and variables that aren't used, instead of just classes.
Ry4an
I'd rather just build from source - same effect, without having to build jars on every commit and then strip them! BTW, if you want to run ProGuard so that it just leaves out unused classes and does *nothing* else - what's the config recipe?
Building from source has the same effect IFF you're recording the tag/node you built from in each sub-repo. Build-controlled jars get you that for free. I get great comfort from knowing I can build Foo 1.1.1 against Bar 1.0.2 with Baz 3.2.2-af783d6dfe and know I have a configuration that passes all tests. With all your code in one repo you can get that too, of course, but it's much harder doing that with source and separate code repos, which is what DVCSs work best with. I think ProGuard was just using: -dontshrink -dontoptimize -dontobfuscate
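Recording the node you built from in each sub-repo is what a super-project tag does automatically with git submodules: the tag pins the exact sub-repo commits, so the shipped source can be recovered later. A throwaway local-repo demo, reusing the module1/share1 names from the question (`protocol.file.allow=always` only exists to let newer git clone a local-path submodule):

```shell
set -e
top=$(mktemp -d)
cd "$top"
git init -q share1
( cd share1 \
  && git config user.email demo@example.com && git config user.name demo \
  && echo v1 > version.txt && git add version.txt && git commit -qm "share1 v1" )
git init -q module1
cd module1
git config user.email demo@example.com && git config user.name demo
git commit -qm "module1: initial" --allow-empty
git -c protocol.file.allow=always submodule add -q ../share1 dependencies/share1
git commit -qm "module1: pin share1"
git tag shipped-1.0    # records module1 AND the exact share1 commit
# development moves on after the release
( cd dependencies/share1 \
  && git config user.email demo@example.com && git config user.name demo \
  && echo v2 > version.txt && git commit -qam "share1 v2" )
git add dependencies/share1
git commit -qm "module1: advance share1 pointer"
# later: recover exactly what was shipped
git checkout -q shipped-1.0
git submodule update -q
cat dependencies/share1/version.txt    # prints: v1
```

Checking out the tag and running `git submodule update` restores every dependency to its as-shipped revision, which is the submodule equivalent of the CVS tag-everything workflow described below.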
Ry4an
Yes - we use one repository, so it's easy to tag everything checked out, and the build file can optionally be parametrized by tag - so I can pull out the source for exactly what was shipped, and if I then need to do an incremental bug fix it's easy to branch from there if needed.
Yup, that's one common way. A lot of folks are moving to tracked daily artifact builds for the advantages I've listed above. To each their own.
Ry4an