I'm setting up a continuous integration server and I'm unsure of the best way to structure the build: is it better to put everything in one big job, or to break it into smaller, dependent tasks?

+1  A: 

I use TeamCity with a NAnt build script. TeamCity makes it easy to set up the CI server part, and the NAnt build script makes it easy to handle a number of tasks such as report generation.

Here is an article I wrote about using CI with CruiseControl.NET; it has a NAnt build script in the comments that can be re-used across projects:

Continuous Integration with CruiseControl
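
Not from the article itself, but as a rough illustration of the kind of NAnt script it describes, a minimal build file with a couple of chained targets might look something like this (project, solution and assembly names are placeholders):

    <?xml version="1.0"?>
    <!-- Minimal NAnt build file sketch; names and paths are hypothetical. -->
    <project name="MyApp" default="test" basedir=".">

      <!-- Compile the solution by shelling out to MSBuild. -->
      <target name="compile">
        <exec program="msbuild">
          <arg value="src\MyApp.sln" />
          <arg value="/p:Configuration=Release" />
        </exec>
      </target>

      <!-- Run the unit tests once compilation has succeeded. -->
      <target name="test" depends="compile">
        <exec program="nunit-console">
          <arg value="src\MyApp.Tests\bin\Release\MyApp.Tests.dll" />
        </exec>
      </target>

    </project>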

Sean Chambers
A: 

I would definitely break the job down. Chances are you'll be making changes to the builds, and it'll be easier to track down issues with smaller tasks than by searching through one monolithic build.

You should be able to compose one big job from the smaller pieces anyway.

a_hardin
+2  A: 

You definitely want to break up the tasks. Here is a nice example of a CruiseControl.NET configuration that has different targets (tasks) for each step. It also uses a common.build file which can be shared among projects with little customisation.

http://code.google.com/p/dot-net-reference-app/source/browse/#svn/trunk
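
For a flavour of what that looks like, a CruiseControl.NET project that watches Subversion and delegates the real work to targets in a shared common.build might be configured roughly like this (URLs, paths and target names are made up):

    <cruisecontrol>
      <project name="MyApp">
        <!-- Poll Subversion for modifications; URL and working copy are placeholders. -->
        <sourcecontrol type="svn">
          <trunkUrl>http://svn.example.com/myapp/trunk</trunkUrl>
          <workingDirectory>C:\builds\myapp</workingDirectory>
        </sourcecontrol>

        <tasks>
          <!-- Hand the actual build steps off to targets in the shared common.build. -->
          <nant>
            <baseDirectory>C:\builds\myapp</baseDirectory>
            <buildFile>common.build</buildFile>
            <targetList>
              <target>compile</target>
              <target>test</target>
            </targetList>
          </nant>
        </tasks>
      </project>
    </cruisecontrol>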

Matt Hinze
A: 

G'day,

As you're talking about integration testing, my big (obvious) tip would be to build and configure the test server to be as close to the deployment environment as possible.

</thebloodyobvious> (-:

cheers, Rob

Rob Wells
A: 

The approach I favour is the following setup (assuming you are working on a .NET project); a sketch of how these pieces can hang together as NAnt targets follows the list:

  • CruiseControl.NET.
  • NAnt tasks for each individual step; NAntContrib for alternative CC templates.
  • NUnit to run unit tests.
  • NCover to perform code coverage.
  • FxCop for static analysis reports.
  • Subversion for source control.
  • CCTray or similar on all dev boxes to get notification of builds, failures etc.
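
As a rough sketch (tool names on the PATH, assembly names and report paths are all assumptions), the coverage and static-analysis steps can simply be extra NAnt targets chained off the compile:

    <?xml version="1.0"?>
    <!-- Hypothetical NAnt targets for the coverage and analysis steps above. -->
    <project name="MyApp.ci" default="ci">

      <!-- The ci target just strings the individual steps together. -->
      <target name="ci" depends="compile, coverage, static-analysis" />

      <target name="compile">
        <exec program="msbuild">
          <arg value="src\MyApp.sln" />
        </exec>
      </target>

      <!-- NCover wraps the NUnit run to collect coverage; exact switches vary by NCover version. -->
      <target name="coverage" depends="compile">
        <exec program="ncover.console">
          <arg value="nunit-console" />
          <arg value="src\MyApp.Tests\bin\Release\MyApp.Tests.dll" />
        </exec>
      </target>

      <!-- FxCop writes an XML report that the CC.NET dashboard can merge and display. -->
      <target name="static-analysis" depends="compile">
        <exec program="fxcopcmd">
          <arg value="/file:src\MyApp\bin\Release\MyApp.dll" />
          <arg value="/out:reports\fxcop.xml" />
        </exec>
      </target>

    </project>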

On many projects you find that there are different levels of tests and activities that take place when someone checks in. Sometimes these grow to the point where a long time passes after a checkin before a developer can see whether they have broken the build.

What I do in these cases is create three builds (or maybe two); the trigger sketch after this list shows how they might be wired up:

  • A CI build, triggered by checkin, that does a clean SVN get, builds, and runs lightweight tests. Ideally you can keep this down to minutes or less.
  • A more comprehensive build, perhaps hourly (if there are changes), which does the same as the CI build but runs more comprehensive and time-consuming tests.
  • An overnight build which does everything, runs code coverage and static analysis of the assemblies, and runs any deployment steps to build daily MSI packages etc.
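
In CruiseControl.NET terms this usually ends up as separate projects with different triggers, something along these lines (project names and times are just examples; sourcecontrol and task blocks are omitted):

    <cruisecontrol>
      <!-- Fast CI build: fires shortly after a checkin is detected. -->
      <project name="MyApp.ci">
        <triggers>
          <intervalTrigger seconds="120" />
        </triggers>
      </project>

      <!-- Hourly build: only runs if there have been modifications. -->
      <project name="MyApp.hourly">
        <triggers>
          <intervalTrigger seconds="3600" buildCondition="IfModificationExists" />
        </triggers>
      </project>

      <!-- Nightly build: full tests, coverage, static analysis, MSI packaging. -->
      <project name="MyApp.nightly">
        <triggers>
          <scheduleTrigger time="02:00" buildCondition="ForceBuild" />
        </triggers>
      </project>
    </cruisecontrol>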

The key thing about any CI system is that it needs to be organic and constantly tweaked. There are some great extensions to CruiseControl.NET which log and chart build timings for each step, letting you do historical analysis and continuously tweak the builds to keep them snappy. It's something managers find hard to accept, but a build box will probably keep you busy for about a fifth of your working time just to stop it grinding to a halt.

Shaun Austin
+1  A: 

We use buildbot, with the build broken down into discrete steps. There is a balance to be found between breaking the build into steps with enough granularity and keeping each step a complete, meaningful unit.

For example, at my current position we build the sub-pieces for each of our platforms (Mac, Linux, Windows) on their respective platforms. We then have a single step (with a few sub-steps) that combines them into the final version that ends up in the distributions.

If something goes wrong in any of those steps it is pretty easy to diagnose.

My advice is to write the steps out on a whiteboard in as general terms as you can, and then base your build steps on that. In my case that would be:

  1. Build Plugin Pieces
    1. Compile for Mac
    2. Compile for PC
    3. Compile for Linux
  2. Make final Plugins
  3. Run Plugin tests
  4. Build intermediate IDE (we have to bootstrap the build)
  5. Build final IDE
  6. Run IDE tests
Nathan Black