My organisation has started a continuous integration project to automate the build of their large public-facing Web site.

By "large", I mean 30+ REST services, content and integration for an external CMS, and several ASP.NET front-ends. The systems are written with a mix of Java and C# deployed to a mix of Linux and Windows Server boxes.

We work following an agile process with seven cross-disciplinary teams, all running to a weekly sprint cycle.

We have automated build and deployment of each of the individual services, but now our challenge is to automate the (currently manual) integration and final acceptance testing.

My concerns are:

  • What happens when a service changes its contract, its consumers update their code, and then the service changes its contract again? Will we /ever/ get a stable build?

  • Dependency checking is a nightmare in the manual system, and I can't see it getting better in an automated system. (We use Maven with Nexus in the Java world, with plans to use Ivy; we are attempting to squeeze the .NET code into this with interesting results.)

  • How deep should our tests be? How often should they run?

+3  A: 

What happens when a service changes its contract, its consumers update their code, and then the service changes its contract again? Will we /ever/ get a stable build?

It sounds to me as though, in addition to looking at continuous integration, you need to look at how you are managing your source control system. If you have different teams working on a web service and its consumers, that work could be done on a feature branch. Once the changes to the web service contract are checked in to the feature branch, the consumers of that service can be updated, and once the tests pass on that feature branch, it can be merged into trunk.

Tests should be run automatically every time a check-in is made to trunk, and if they don't pass, the first priority should be to fix whatever broke them.
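To make that concrete, here is a minimal sketch of the kind of consumer-side contract test that could gate those check-ins. It assumes JUnit 4 and plain HttpURLConnection; the base URL, resource path and JSON field names are made up for illustration, not taken from your system:

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    import org.junit.Test;

    // Consumer-side contract test: fails as soon as the (hypothetical) order
    // service stops exposing the parts of its contract this consumer relies on.
    public class OrderServiceContractTest {

        // Hypothetical system property pointing at the integration environment.
        private static final String BASE_URL =
                System.getProperty("orders.base.url", "http://integration.example.com/orders");

        @Test
        public void orderResourceStillExposesTheFieldsWeConsume() throws Exception {
            HttpURLConnection connection =
                    (HttpURLConnection) new URL(BASE_URL + "/12345").openConnection();
            connection.setRequestProperty("Accept", "application/json");

            assertEquals(200, connection.getResponseCode());

            BufferedReader reader = new BufferedReader(
                    new InputStreamReader(connection.getInputStream(), "UTF-8"));
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line);
            }
            reader.close();

            // Crude but effective: the fields this consumer depends on must be present.
            assertTrue(body.toString().contains("\"orderId\""));
            assertTrue(body.toString().contains("\"status\""));
        }
    }

If the owning team renames or removes a field, a test like this breaks on the next trunk build rather than during manual acceptance testing.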

What exactly are the issues with the dependencies? Whether you are using Maven or Ivy, once you have the dependencies defined for your projects, things should be pretty smooth. Continuous integration won't hurt here once you get a repeatable build working; it will help by pointing out more quickly when things are getting out of sync.

mattjames
My suspicion would be that they are not properly bumping version numbers when changing contracts, or are not using version numbers when specifying dependencies.
Jim Rush
A: 

I think you'd benefit quite a lot from tests that exercise the basic functionality of the app and are likely to break when a service contract change breaks that service's consumers.

These tests (or at least a 'fast' subset of them) should be run every time you deploy your website to an integration test environment. The full set would run at least nightly.
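As a rough sketch of what that 'fast' subset might look like (again JUnit 4; the host name and paths below are placeholders for your own front-ends and services, not real endpoints):

    import static org.junit.Assert.assertEquals;

    import java.net.HttpURLConnection;
    import java.net.URL;

    import org.junit.Test;

    // Post-deployment smoke test, run against the integration environment
    // every time "the website" is deployed.
    public class DeploymentSmokeTest {

        // Hypothetical system property for the integration environment host.
        private static final String HOST =
                System.getProperty("integration.host", "http://integration.example.com");

        // Placeholder paths: one per front-end / service you want to prove is alive.
        private static final String[] PATHS = {
                "/",
                "/services/orders/ping",
                "/services/catalogue/ping",
                "/cms/content/home"
        };

        @Test
        public void everyDeployedEndpointAnswers() throws Exception {
            for (String path : PATHS) {
                HttpURLConnection connection =
                        (HttpURLConnection) new URL(HOST + path).openConnection();
                connection.setConnectTimeout(5000);
                connection.setReadTimeout(5000);
                assertEquals("Unexpected status for " + path, 200,
                        connection.getResponseCode());
            }
        }
    }

Anything deeper, such as full acceptance suites or cross-service scenarios, belongs in the nightly run.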

I think you need to view the website as a super-project. If someone changes a service and breaks its consumers, the deployment of the website is marked as failed. With an aggregated change log across all the projects, identifying the responsible service and developer should be relatively easy.

When you deploy, you'll usually deploy "the website", which effectively means calling the deployment process for each of the included services, content, and so on. Or perhaps just the changed bits.

Basically, this comes down to an organizational shift: services have to be stable enough that they can be integrated with everyone else's work. If that isn't possible, the service gets its own branch, everyone else works against the previous stable version, and integrating with the new version becomes a high-priority story in a later sprint. Hopefully the teams will want to avoid that and keep their services backwards compatible.

EricMinick