views: 503
answers: 6
Hi,

Our policy when delivering a new version is to create a branch in our VCS and hand it over to our QA team. When the latter gives the green light, we tag and release our product. The branch is kept to receive (only) bug fixes so that we can create technical releases. Those bug fixes are subsequently merged into the trunk.
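For illustration, here is a minimal sketch of that workflow, assuming Subversion (the branch name, tag name and revision number below are placeholders, not anything we actually use):

    # create the release branch and hand it to QA
    svn copy ^/trunk ^/branches/1.2 -m "Create 1.2 release branch"
    # QA gives the green light: tag and release
    svn copy ^/branches/1.2 ^/tags/1.2.0 -m "Tag 1.2.0"
    # a bug fix committed on the branch (say r1234) is later merged into the trunk
    cd trunk-wc
    svn merge -c 1234 ^/branches/1.2 .
    svn commit -m "Merge fix for issue X from the 1.2 branch"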

During this time, the trunk sees the main development work, and is potentially subject to refactoring changes.

The issue is that there is a tension between the need to have a stable trunk (so that the merge of bug fixes succeeds -- it usually doesn't if the code has been, e.g., extracted to another method or moved to another class) and the need to refactor it when introducing new features.

The policy in our place is to not do any refactoring until enough time has passed and the branch is stable enough. When this is the case, one can start doing refactoring changes on the trunk, and bug fixes are to be manually committed on both the trunk and the branch.

But this means that developers must wait quite some time before committing any refactoring change on the trunk, because it could break the subsequent merge from the branch to the trunk. And having to manually port bug fixes from the branch to the trunk is painful. It seems to me that this hampers development...

How do you handle this tension?

Thanks.

+10  A: 

This is a real practical problem. It gets worse if you have several versions you need to support and have branched for each. Even worse still if you have a genuine R&D branch too.

My preference was to let the main trunk proceed at its normal rate and not hold it back, because in an environment where release timings were commercially important I could never argue the case that we should allow the code to stabilise ("what, you mean you released it in an unstable state?").

The key was to make sure that the unit tests that were created for the bug fixes were transitioned across when the bug fix was migrated into the main branch. If your new code changes are genuinely just refactoring then the old tests should work equally well. If your changes are such that the tests are no longer valid then you can't just port your fix in any case, and you'll need to have someone think hard about the fix in the new code stream.
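As a hedged sketch of what porting a fix together with its regression test could look like (assuming Subversion and an Ant "test" target; the revision and branch names are placeholders):

    cd trunk-wc
    # the fix revision on the release branch includes its unit test
    svn merge -c 1234 ^/branches/1.2 .
    # after pure refactoring the old test should still pass on trunk
    ant test
    svn commit -m "Port fix and regression test for issue X to trunk"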

After quite a few years managing this sort of problem I concluded that you probably need 4 code streams at a minimum to provide proper support and coverage, and a collection of pretty rigorous processes to manage code across them. It's a bit like the problem of being able to draw any map in 4 colours.

I never found any really good literature on this subject. It will inevitably be linked to your release strategy and the SLAs that you sign with your customers.

Addendum: I should also mention that it was necessary to write the branch merges into the release schedule of the main branch as specific milestones. Don't underestimate the amount of work that might be entailed in bringing your branches together if you have a collection of hard-working developers doing their job implementing features.

Simon
A: 

Where I work, we carry on with the refactoring in the main branch. If the merges get tricky, they just have to be dealt with on an ad-hoc basis; they're all doable, but occasionally take a bit of time.

Scott Langham
A: 

Maybe our problem comes from the fact that we have branches that must have quite a long life (up to 18 months), and there are many fixes that have to be done against them.

Making sure that we only branch from extremely stable code would probably help, but would not be so easy... :(

Xavier Nodet
Erm, this isn't even AN answer, let alone THE answer...
Benjol
+1  A: 

I think the tension can be handled by adding the following ingredients to your development process:

  1. Continuous integration
  2. Automated functional tests (I presume you already have unit tests)
  3. Automated delivery

With continuous integration, every commit triggers a build in which all unit tests are executed, and you are alerted if anything goes wrong. You start working more on the head and become less prone to branching the code base.

With automated functional tests, you are able to test your application at the click of a button. Since these tests generally take more time, they are run nightly. With this, the classic role of versioning starts to lose its importance. You no longer base the decision of when to release on a version and its maturity; it becomes more of a business decision. If you have implemented unit and functional testing and your team is submitting tested code, the head should always be in a state that can be released. Bugs are continuously discovered and fixed, and releases delivered, but this is no longer a cyclical process; it is a continuous one.
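A rough sketch of what the two kinds of automated runs might look like (the Ant target names here are assumptions, not anything prescribed above):

    # per-commit CI job: update, build and run unit tests, alert the team on any failure
    svn update
    ant clean test
    # nightly job: the slower, Selenium-style functional suite
    ant functional-test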

You will probably face two types of detractors, since this implies changing some long-rooted practices. First, the paradigm shift of continuous delivery seems counter-intuitive to managers. “Aren't we risking shipping a major bug?” If you take a look at Linux or Windows distros, this is exactly what they are doing: pushing releases towards customers. And since you have a suite of automated tests, the danger is further diminished.

Next, the QA team or department. (Some would argue that the problem is their existence in itself!) They will generally be averse to automating tests. It means learning a new and sometimes complicated tool. Here, the best approach is to preach by doing. Our development team started working on continuous integration and at the same time writing a suite of functional tests with Selenium. When the QA team saw the tool in action, it was difficult to oppose its adoption.

Finally, the truth is that the process I described is not as simple as adding 3 ingredients to your development process. It implies a deep change to the way you develop software.

Dan
I agree with your three items, but I don't think they help with my concern... I'm worried about the difficulty of merging changes into a development line that undergoes large refactoring changes: these changes lead to many conflicts... :(
Xavier Nodet
The idea behind continuous integration is to perform refactorings in small steps and to integrate continuously. This way some of the problems are mitigated. Depending on the design of your code, you might still end up with wide-ranging breaking changes. Refactor to reduce dependencies first.
Dan
+4  A: 

Where I work, we create temporary, short-lived (less than a day to a few weeks) working branches for every non-trivial change (feature addition or bug fix). Trunk is stable and (ideally) potentially releasable all the time; only done items get merged into it. Everything committed to trunk gets merged into the working branches every day; this can be largely automated (we use Hudson, Ant and Subversion). (This last point because it's usually better to resolve any conflicts sooner rather than later, of course.)
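Concretely, the daily catch-up merge and the final reintegration might look something like this (a sketch assuming Subversion 1.5+; the branch name is a placeholder):

    # every day, in the working-branch checkout: pull in everything committed to trunk
    svn merge ^/trunk .
    svn commit -m "Sync working branch with trunk"

    # when the item is done, in a trunk checkout: merge the finished work back
    svn merge --reintegrate ^/branches/feature-x .
    svn commit -m "Reintegrate feature-x into trunk"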

The current model we use was largely influenced by an excellent article (which I've plugged before) by Henrik Kniberg: Version Control for Multiple Agile Teams.

(In our case, we have two scrum teams working on one codebase, but I've come to think this model can be beneficial even with one team.)

There is some overhead from the extra branching and merging, but not too much, really, once you get used to it and get better with the tools (e.g. svn merge --reintegrate is handy). And no, I do not always create a temp branch, e.g. for smaller, low-risk refactorings (unrelated to the main items currently under work) that can easily be completed with one commit to trunk.

We also maintain an older release branch in which critical bugs are fixed from time to time. Admittedly, there may be manual (sometimes tedious) merging work if some particular part of the code has evolved significantly in trunk compared to the branch. (This hopefully becomes less of an issue as we move towards continually releasing increments from trunk (internally), and letting marketing & product management decide when they want to do an external release.)

I'm not sure if this answers your question directly, or if you can apply this in your environment (with the separate QA team and all) - but at least I can say that the tension you describe does not exist for us and we are free to refactor whenever. Good luck!

Jonik
A: 

Maybe Git (or other DVCSs) are better at handling merges into updated code, thanks to the fact that they (really) manage changes rather than just comparing files... As Joel says:

With distributed version control, merges are easy and work fine. So you can actually have a stable branch and a development branch, or create long-lived branches for your QA team where they test things before deployment, or you can create short-lived branches to try out new ideas and see how they work.
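For example, a minimal sketch of that model in Git (the branch, tag and commit names are placeholders):

    git checkout -b stable v1.2        # long-lived branch for QA / technical releases
    git checkout -b feature-y master   # short-lived branch to try out an idea
    # ... work on the feature, commit ...
    git checkout master
    git merge feature-y                # merges are cheap, so branches stay practical
    git cherry-pick <fix-on-stable>    # port an individual bug fix from the stable branch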

Not tried yet, though...

Xavier Nodet
Refactoring changes are often the kind of changes that can't be automatically merged, even by DVCSs. (For example, if one person changes a method signature on one branch, and another person moves the method on a second branch, and someone else fixes something at a different level to remove the need for the method altogether.) Using a DVCS will lessen the problems, but not eliminate them entirely.
Ken