My organisation has started a continuous integration project to automate the build of our large, public-facing Web site.
By "large", I mean 30+ REST services, content and integration for an external CMS, and several ASP.NET front-ends. The systems are written with a mix of Java and C# deployed to a mix of Linux and Windows Server boxes.
We follow an agile process, with seven cross-disciplinary teams all running on a weekly sprint cycle.
We have automated the build and deployment of each individual service; our challenge now is to automate the (currently manual) integration and final acceptance testing.
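To give an idea of the level of check I'd like the pipeline to run after each deployment, here is a rough sketch using JUnit 5 and the JDK 11 HTTP client. The class name, endpoint path and system property are made up for illustration; they are not our real services:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical smoke test run against a freshly deployed integration environment.
class CatalogueServiceSmokeTest {

    private static final String BASE_URL =
            System.getProperty("integration.baseUrl", "http://int-env.example.com");

    private final HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(5))
            .build();

    @Test
    void serviceIsUpAndReturnsJson() throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/catalogue/items")) // placeholder endpoint
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The bare minimum an automated acceptance gate might assert after deployment.
        assertEquals(200, response.statusCode());
        assertTrue(response.headers().firstValue("Content-Type")
                .orElse("").contains("application/json"));
    }
}
```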
My concerns are:
What happens when a service changes its contract, its consumers update their code, and then the service changes its contract again? Will we /ever/ get a stable build? (A rough sketch of the kind of consumer-side check I have in mind is at the end of this list.)
Dependency checking is a nightmare in the manual system, and I can't see it getting better in an automated system. (We use Maven with Nexus in the Java world, with plans to use Ivy; we are attempting to squeeze the .NET code into this with interesting results.)
How deep should our tests be? How often should they run?
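On the first concern, the direction I'm currently leaning towards is some form of consumer-driven contract testing: each consuming team keeps a small test describing the fields it actually reads, and that test runs against the provider's latest build so a breaking contract change fails fast rather than surfacing during integration. A minimal sketch of the idea follows, with invented field names and a hard-coded payload standing in for whatever the provider would really publish (a real setup would more likely use a dedicated tool such as Pact):

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical consumer-side contract check: the consuming team lists the fields
// it actually reads, and this test fails if the provider stops sending any of them.
class OrderServiceContractTest {

    // Fields this particular consumer depends on (invented for illustration).
    private static final String[] REQUIRED_FIELDS = {"orderId", "status", "totalPence"};

    @Test
    void providerResponseStillContainsTheFieldsWeRead() throws Exception {
        // In a real pipeline this payload would come from the provider's latest build,
        // e.g. a sample response published alongside the service artifact; here it is
        // simply hard-coded to keep the sketch self-contained.
        String payload =
                "{\"orderId\":\"123\",\"status\":\"DISPATCHED\",\"totalPence\":4999}";

        JsonNode response = new ObjectMapper().readTree(payload);

        for (String field : REQUIRED_FIELDS) {
            assertTrue(response.has(field), "Provider no longer returns field: " + field);
        }
    }
}
```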