tags:
views: 265
answers: 5

The classic descriptions of agile development have releasable code at the end of an iteration. If there is further testing and validation that has to happen to create a releasable product, how do you integrate that into the process?

A: 

Automated testing after each automated build gets you at least part of the way.
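As a minimal sketch of that idea (the build and test commands below are placeholders for whatever toolchain you actually use, not anything from the original answer), a small script can run the test suite as the final step of every build:

    # build_and_test.py -- a minimal sketch: run the test suite as the last
    # step of every build. Commands and paths are placeholders; substitute
    # your own build and test tools.
    import subprocess
    import sys

    def step(name, cmd):
        print(f"--- {name}: {' '.join(cmd)}")
        if subprocess.call(cmd) != 0:
            sys.exit(f"{name} failed -- build is not releasable")

    if __name__ == "__main__":
        # 1. Compile/build step (stand-in; replace with your real build command).
        step("build", [sys.executable, "-m", "compileall", "-q", "src"])
        # 2. Run the automated tests immediately after the build succeeds.
        step("test", [sys.executable, "-m", "pytest", "tests", "-q"])
        print("build is green")

Wiring this into the build itself (rather than running it by hand) is what makes "releasable at the end of the iteration" a property you can check every day instead of once per release.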

Matt Dillard
I second the use of Build Events for Testing and Documentation.
Penguinix
A: 

Add system testing to your sprint backlog (in Scrum) or the equivalent.

Ditto user documentation.

catfood
+3  A: 

What stops you from making your own process? If you find something can be better, just do it. If it works, persevere with it; if not, try something else. There is no set-in-stone process if you want agility.
The term that is used more frequently is 'shippable' code at the end of every iteration, which means that you can give it to the end user (as a bunch of DLLs to copy off a share, or a personally delivered CD/DVD) and they will obtain value from using it. Such code has passed all the unit tests (developers) and acceptance tests (customers/QA/analysts) that were deemed necessary for it to be "DONE!" Acceptance tests are end-to-end customer scenario simulations.
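For illustration, an end-to-end acceptance test might look something like the sketch below; the OrderSystem class and its methods are invented stand-ins for a real application, used only to show the shape of such a test:

    # Hypothetical end-to-end acceptance scenario: a customer places an order
    # and gets a confirmation back. OrderSystem is an invented stand-in for
    # the real application entry point.

    class OrderSystem:
        def __init__(self):
            self.orders = {}

        def place_order(self, customer, item):
            order_id = len(self.orders) + 1
            self.orders[order_id] = (customer, item)
            return order_id

        def confirmation_for(self, order_id):
            customer, item = self.orders[order_id]
            return f"Order {order_id}: {item} for {customer}"

    def test_customer_can_order_and_get_confirmation():
        system = OrderSystem()
        order_id = system.place_order("Alice", "Widget")
        assert "Widget" in system.confirmation_for(order_id)

    if __name__ == "__main__":
        test_customer_can_order_and_get_confirmation()
        print("acceptance scenario passed")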

I'm not sure what you mean by 'further testing and validation'. I can think of other 'pre-release' activities:

  • certain activities like "Training Conferences" and related content creation.
  • Demos or Deploying to Beta sites for a month before release if customer deployments are rare or infeasible to do frequently.
  • Prospective Clients / Experts / Services getting a hands-on sneak-peek at the new product they have been hearing about.

You just stack it after your last iteration's end point. (If you are particularly pessimistic like me, take historical averages; if you release early, yay!) So if the business has decided that Iteration #14 delimits a good set of features that can be a release, it is just 'add n weeks' after the end of Iteration #14; no complex math or uncertainty at that point. The key point is that if you have been engaging the stakeholders/customers regularly, incorporating feedback, and maintaining an acceptable level of quality, there should be no last-minute surprises.

If need be, you can even do a rolling start, i.e. the training team starts work as the dev team enters Iteration #13. That gives them a month, assuming two-week iterations, and hopefully you wouldn't have a ton of features entering in the last iteration. So at most two weeks after Iteration #14, and subject to all celestial/organizational alignments, you should have a release and a well-deserved break.
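To make the date arithmetic concrete, here is a small sketch; the start date, iteration length, and number of stabilization weeks are all invented for illustration:

    # Back-of-the-envelope release date: n stabilization weeks tacked onto the
    # end of the last feature iteration. All numbers here are examples only.
    from datetime import date, timedelta

    ITERATION_LENGTH_WEEKS = 2
    STABILIZATION_WEEKS = 2          # the "add n weeks" from the answer
    project_start = date(2009, 1, 5)
    last_iteration = 14

    end_of_last_iteration = project_start + timedelta(
        weeks=ITERATION_LENGTH_WEEKS * last_iteration)
    release_date = end_of_last_iteration + timedelta(weeks=STABILIZATION_WEEKS)

    print("Iteration #14 ends:", end_of_last_iteration)
    print("Planned release:  ", release_date)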

Gishu
I worked with a release process that required a long stress test of a high-reliability product. The stress test included very large volumes of data and multiple overlapping error conditions, and would run for several days. That was the sort of "further testing" I was thinking of.
Rachel
A: 

The execution of system testing is usually too slow to tightly integrate into agile development. (There are exceptions to this, e.g. a well-engineered suite of browser tests can run not much slower than typical unit tests.)

One way of integrating it is to have an overnight or continuous build that runs all the time and can take several hours to build and run all the tests. If a build passes all the tests (unit tests + system tests), it becomes releasable, and you can deliver that binary or that snapshot of the source code. The idea is to have x versions of your binaries/code snapshots, verify them asynchronously, and deliver the green builds. This should work with automated system tests as well as manual ones.
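A minimal sketch of that asynchronous verification idea follows; the directory names, test commands, and the GREEN/RED marker files are all assumptions made for illustration:

    # nightly_verify.py -- freeze a code snapshot, run the unit and system
    # suites against it, and mark the snapshot green only if everything
    # passes. Paths and commands are placeholders for your own setup.
    import shutil
    import subprocess
    import sys
    from datetime import datetime
    from pathlib import Path

    SNAPSHOT_ROOT = Path("builds")

    def run(cmd):
        return subprocess.call(cmd) == 0

    def verify_snapshot():
        stamp = datetime.now().strftime("%Y%m%d-%H%M")
        snapshot = SNAPSHOT_ROOT / stamp
        shutil.copytree("src", snapshot / "src")          # freeze the snapshot
        ok = (run([sys.executable, "-m", "pytest", "tests/unit", "-q"]) and
              run([sys.executable, "-m", "pytest", "tests/system", "-q"]))
        (snapshot / ("GREEN" if ok else "RED")).touch()   # only GREEN ships
        return ok

    if __name__ == "__main__":
        sys.exit(0 if verify_snapshot() else 1)

Because verification runs out-of-band, developers keep committing at full speed while the slow suite decides, hours later, which snapshots are actually deliverable.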

Jiayao Yu
+1  A: 

First, recognize that the breadth of the testing you speak of increases as the project proceeds and the software gains scope and/or complexity. Because of this, trying to fit this effort into an iteration stops working after one or two iterations. The feel-good rule for iterations is a constant level of work in each one, as determined by project velocity.

Solutions then can take one of two roads: with or without automation. Automation at the higher test levels reduces the effort to run the tests, making the work fit inside the iteration again, since each iteration only has to cover the incremental increases in scope and complexity. This isn't achievable in all project contexts, even if it is what we want. Over-valuing high-level test automation is a pitfall to take seriously; in other words, avoid under-valuing what a reasonably experienced exploratory tester brings to the table.

Without automation, the problem shifts to one of test management. Parallel, time-shifted testing iterations are one candidate solution. For example, you could establish a testing backlog for system testing tasks that is managed with the same cadence as the development iterations but is delayed, or time-shifted, by as much as one full iteration. This enables the testers to work holistically on new releases in their own sandbox and to their own priorities.
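A small sketch of what that one-iteration offset looks like on a calendar; the start date and iteration length are invented for illustration:

    # Dev and test iterations share one cadence, with testing shifted by one
    # full iteration: testers exercise iteration n while devs build n+1.
    # All dates below are illustrative.
    from datetime import date, timedelta

    ITERATION = timedelta(weeks=2)
    start = date(2009, 3, 2)

    for n in range(1, 5):
        dev_start = start + ITERATION * (n - 1)
        test_start = dev_start + ITERATION
        print(f"Iteration {n}: dev {dev_start} .. {dev_start + ITERATION}, "
              f"test {test_start} .. {test_start + ITERATION}")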

I would advocate that the testing iteration backlogs be built in collaboration with the developers, just as the developers' iteration backlogs should be built in collaboration with the testers. I would also advocate a test team that has automation experience, so that they can automate the tedium and work in a more exploratory fashion. Their portfolio of automated tests should grow with each iteration. They should also have access to the developers' unit tests and be able to run them on releases in the testing sandbox.

Working out of phase like this doesn't make the increasing test scope/complexity problem go away, but it does provide a mechanism for managing that complexity, since the team is creating backlog items, adjusting priorities, automating some of them, creating checklists, etc., based on what they collectively think they should do next. Chances are they will hit the big items.

Preserving the testers' ability to work holistically, to evolve their understanding, and to share their knowledge about the system through automated tests all seem worth striving for.

Adam Geras