views: 217

answers: 3

This is perhaps not a question unique to Mercurial, but that's the SCM that I've been using most lately.

I work on multiple projects and tend to copy source code for libraries or utilities from a previous project to get a leg up on starting a new project. The problem comes when I want to merge the changes I made in my latest project back into a "master" copy of those shared library files.

Since the files stored in disjoint repositories will have distinct version histories, Mercurial won't be able to perform an intelligent merge if I just copy the files back to the master repo (or even between two independent projects).

I'm looking for an easy way to preserve the change history so I can merge library files back into the master with a minimum of external record keeping (which is one of the reasons I'm moving away from SVN, where merges require remembering when copies were made across branches).

Perhaps I need to do a bit more up-front organization of my repository to prepare for a future merge back to a common master.

+3  A: 

Three solutions; pick your favorite:

  1. Put all projects into one repository.
  2. Make a separate repository for the shared code and a separate repository for each project.
  3. Use one repository with subrepositories (http://mercurial.selenic.com/wiki/subrepos): keep all the common code in one subrepo, with a separate subrepo for each project (see the sketch below).

Copying the actual files between repositories with no common ancestors will never be optimal, because the history is not preserved.
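
A minimal sketch of the subrepo mechanics, assuming a hypothetical URL and path for the shared code:

    # in the parent repository; the path and URL are placeholders
    $ echo "lib/common = http://example.com/hg/common-lib" > .hgsub
    $ hg clone http://example.com/hg/common-lib lib/common
    $ hg add .hgsub
    $ hg commit -m "track the shared library as a subrepo"

On every later commit Mercurial records the exact subrepo changeset in .hgsubstate, so each parent repository pins a known revision of the shared code while the shared code's full history stays in one place.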

xyld
Thanks for the quick response. 1. Not practical: the members (people) using these different repositories are distinct. 2. I'd like to have a single repo encompass all the code needed for each project; this seems to rule out 2. 3. I saw this "experimental" feature; are people really using it? Is it stable/recommended? This seems such a common problem that I would expect there are some standard practices I'm missing.
mckoss
Subrepos are used in production.
tonfa
It was experimental in 1.3; the current version is 1.4.x.
jae
A: 

Use the transplant extension.
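
For what it's worth, a minimal sketch of an invocation; the repository paths and the revision number are placeholders:

    # assumes the extension is enabled in your hgrc ([extensions] section: "transplant =")
    $ cd master-lib
    $ hg transplant --source ../latest-project 42   # 42 is a placeholder revision

This copies the selected changesets from the other repository and applies them as new commits on top of the current working directory.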

just somebody
Yuck. Then you have the same changes all over the place with different hash IDs (due to their different parents).
Ry4an
A: 

I'd recommend against your "copy the source code" practice; use binary distribution for your custom libraries instead. The binaries are checked in alongside the source code (see the sketch after the list below).

  • reduces build time
  • no overhead of tracking changes in all copies of the library
  • you can use different versions of the same library in different projects
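
A sketch of what that check-in might look like in a consuming project; the library name, version, and paths are hypothetical:

    # record the exact library build the project depends on
    $ cp ~/builds/common-lib-1.2.jar lib/
    $ hg add lib/common-lib-1.2.jar
    $ hg commit -m "upgrade common-lib to 1.2"

Each project then records exactly which library version it builds against, and an upgrade is a single, reviewable commit.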

EDIT: And for the issue with "common" or "toolbox" libraries in general, read this post from ayende.

Johannes Rudolph
This is the best solution as far as I'm concerned. Have your continuous integration build system create a new release of the common code daily (or after each change, or whatever), and have the projects which use the libraries download/use them in their build scripts straight from the build box. Internally I/we use Ivy (we're a Java shop) to simply list our internal and external dependencies, and it saves us all sorts of trouble.
Ry4an
I don't think this works well for me, as it couples the projects too closely. Note that before distributing libraries you need to run all the unit tests across every project that consumes the libraries in order to do this safely.
mckoss
Sorry, I don't get it. Why does it couple the projects? One of my points was being able to use different versions for different projects. Running all unit tests when distributing? How is that different from a source distribution?
Johannes Rudolph
I'd recommend against checking binaries into the same repository as the source code. Especially with Mercurial, you're adding binary cruft for everyone who clones the repo.
rq