+2  Q: 

Build Quality

We have three branches {Dev, Test, Release} and will have continuous integration set up for each branch. We want to be able to assign build qualities in each branch, e.g. Dev - Ready for test...

Does anyone have experience with this who can offer advice or a best-practice approach?

We are using TFS 2008 and we are aware that it has Build Qualities built in. What we are looking for is when to apply a quality and what kinds of qualities people use.

Thanks

:)

A: 

Assessing the quality of a build in a deterministic and reproducible way is definitely challenging. I suggest the following:

  1. If you are set up to do automated regression testing then all those tests should pass.
  2. Developers should integration test each of their changes using an official Dev build newly installed on an official and clean test rig and give their personal stamp of approval.

When these two items are satisfied for a particular Dev build you can be reasonably certain that promoting this build to Test will not be wasting the time of your QA team.
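Once both gates are in place, the promotion decision itself is mechanical. A minimal sketch in Python (the function and the data shapes are hypothetical illustrations of the two criteria above, not a TFS feature):

```python
def can_promote_to_test(regression_results, signoffs, devs_with_changes):
    """Decide whether a Dev build may be promoted to the Test branch."""
    # Gate 1: every automated regression test in the run passed.
    all_tests_pass = all(regression_results.values())
    # Gate 2: every developer with changes in this build has
    # integration-tested it on a clean rig and signed off.
    all_signed_off = devs_with_changes <= signoffs  # subset check
    return all_tests_pass and all_signed_off

# One failing regression test (or one missing signoff) blocks promotion.
print(can_promote_to_test({"login": True, "checkout": False},
                          {"alice"}, {"alice"}))  # prints: False
```

The point of making the rule this explicit is that "ready for Test" stops being a judgment call and becomes something the build server can stamp on the build automatically.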

Jared
+1  A: 

Your goal here is to get the highest quality possible in each branch, balanced against the burden of verifying that level of quality.

Allowing quality to drop in any branch is always harmful. Don't think you can let the Dev branch go to hell and then fix it up before merging. It doesn't work well, for two reasons:

  • Recovering is harder than you expect. When a branch is badly broken, you don't know how broken it really is, because each issue hides others. It's also hard to make progress on any one issue, because you'll run into other problems along the way.

  • Letting quality drop saves you nothing. People sometimes say "quality, cost, schedule - pick any two" or something like that. The false assumption here is that you "save" by allowing quality to slip. The problem is that as soon as quality drops, so does "velocity" - the speed at which you get work done. The good news is that keeping quality high doesn't really cost extra, and everyone enjoys working with a high-quality code base.

The compromise you have to make is on how much time you spend verifying quality.

If you do Test Driven Development well, you will end up with a comprehensive set of extremely fast, reliable unit tests. Because of those qualities, you can reasonably require developers to run them before checking in, and run them regularly in each branch, and require that they pass 100% at all times. You can also keep refactoring as you go, which lets you keep velocity high over the life of the project.

Similarly, if you write automated integration / customer tests well, so they run quickly and reliably, then you can require that they be run often, as well, and always pass.

On the other hand, if your automated tests are flaky, if they run slowly, or if you regularly operate with "known failures", then you will have to back off on how often people must run them, and you'll spend a lot of time working through these issues. It sucks. Don't go there.

Worst case, most of your tests are not automated. You can't run them often, because people are slow at manual testing. Quality in your non-release branches will suffer, as will merging speed and development velocity.

Jay Bazuzi