Automated tests MUST be fast to reflect the real state of the project. The idea is that:

  1. after any commit to the repository, an automated build is performed (as fast as it can be done).
  2. if the build succeeds, automated tests are started. These MUST be fast.

This is the best way I know to find out whether your changes break anything.

At first it seemed that making the build fast would be hard, but we managed to keep it around 100 sec for a solution of 105(!) projects (MSVS 2008, C#).

Tests turned out to be not that simple (we use the NUnit framework). Unit testing is not a big problem. It is the integration tests that kill us. Not because they are slower (any ideas on how to make them faster are much appreciated), but because the environment must be set up, which is MUCH slower (at the moment ~1000 sec)!

Our integration tests use web/Windows services (19 so far) that need to be redeployed in order to reflect the latest changes. That involves restarting services and a lot of HDD read/write activity.

Can anyone share experience on how the environment and the workflow should/can be organized/optimized to speed up the automated testing phase? What are the "low-level" bottlenecks and workarounds?

P.S. Books and broad articles are welcome, but real-world working solutions are much more appreciated.

+3  A: 

We use .NET and NUnit, which supports categories (an attribute you can put on a test). We take long-running tests and put them in a NightlyCategory so that they run only during nightly builds, not in the continuous builds that we want to keep fast.
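A minimal sketch of what that looks like with NUnit, assuming a category name of "Nightly" (the attribute and the console switch are standard NUnit; the fixture and test names here are made up for illustration):

```csharp
using NUnit.Framework;

[TestFixture]
public class OrderProcessingTests
{
    [Test]
    public void Total_IsSumOfLineItems()
    {
        // Fast unit test: runs in every continuous build.
        Assert.AreEqual(30, 10 + 20);
    }

    [Test]
    [Category("Nightly")]
    public void FullImport_ProcessesLargeFile()
    {
        // Long-running test: excluded from continuous builds,
        // picked up only by the nightly run.
    }
}
```

The continuous build then invokes something like `nunit-console.exe Tests.dll /exclude:Nightly`, while the nightly build runs the whole suite (or just `/include:Nightly` on top of it).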

Lou Franco
There are many ways to avoid running time-consuming tests. At the moment we first try to perform every possible optimization, and only then start deferring tests, since deferring them means losing the answer to "did my changes break anything?"
Dandikas
This is what we do. Tests over 30 sec (rule of thumb) go in the "NightBuildTest" category. The others are active all the time.
Daok
A: 

Buildbot: http://buildbot.net/trac. I cannot recommend this enough if you're doing continuous integration (automated testing). With a quick configuration, all of our unit tests run each time there is a commit, and the longer integration tests run periodically throughout the day (three times, last I checked, but this can easily be changed).

Mark Roddy
We use CC.NET, but I'll take a look at Buildbot.
Dandikas
+1  A: 

I've put together a presentation on Turbo-Charged Test Suites. The second half is aimed at Perl developers, but the first half might prove useful to you. I don't know enough about your software to know if it's appropriate.

Basically, it covers techniques for speeding up database usage in test suites, and for running tests in a single process to avoid constantly reloading libraries.
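One common database technique of that kind (my illustration, not necessarily what the presentation shows) is wrapping each test in a transaction that is rolled back, so the database never has to be rebuilt between tests. In NUnit/.NET terms it might look like this, using `System.Transactions`:

```csharp
using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class CustomerRepositoryTests
{
    private TransactionScope _scope;

    [SetUp]
    public void BeginTransaction()
    {
        // Everything the test writes to the database
        // happens inside this ambient transaction.
        _scope = new TransactionScope();
    }

    [TearDown]
    public void RollBack()
    {
        // Disposing without calling Complete() rolls everything
        // back, so the next test sees a pristine database.
        _scope.Dispose();
    }

    [Test]
    public void Save_ThenLoad_ReturnsSameCustomer()
    {
        // ... exercise the repository against the database here ...
    }
}
```

This trades a one-time schema setup for per-test rollbacks, which are usually far cheaper than dropping and recreating data.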

Ovid
+3  A: 

There are a number of optimization strategies you can use to improve the throughput of tests, but you need to ask yourself what the goal of this testing is and why it needs to be fast.

Some tests take time. This is a fact of life. Integration tests usually take time, and you usually have to set up an environment in order to run them. If you set up an environment, you will want one that is as close to the final production environment as possible.

You have two choices:

  1. Optimize the tests, or the deployment of the tests.
  2. Don't do them as often.

In my experience, it's better to have an integration environment which is correct, finds bugs, and adequately represents the final production environment. I usually choose option 2 (1).

It's very tempting to say that we'll test everything all of the time, but in reality you need a strategy.

(1) Except if there are loads of bugs which are only found in integration, in which case, forget everything I said :-)

MatthieuF
We are in a state where the integration tests do not take too long, but deploying them is a killer. Thus we are trying option 1 at the moment.
Dandikas
+1  A: 

I'd suggest having several high-level end-to-end tests, and if any one of those fails, running the "higher-resolution" tests.

Think of doing tech support over the phone...

Does your computer work? If yes, done. If no, does your computer turn on at all? ...

For my unit testing, I have a few fast tests like "does my computer work?" If those pass, I don't execute the rest of my suite. If any of those tests fails, I execute the associated suite of lower-level tests that gives me a higher-resolution view of that failure.

My view is that running a comprehensive suite of top level tests should take less than half a second.

This approach gives me both speed and detail.

shapr
Surely the problem with that is if your top-level tests miss some condition which should make the tests fail. To use your analogy: "does your computer work? done" - what if the mouse is broken? It *should* fail the test, but the "computer works" check still passes.
dbr
I agree; choose your end-to-end tests carefully, and run all the detail tests while you sleep. If your detail tests expose a bug that your end-to-end tests do not, see how you can improve your end-to-end tests. Start somewhere, then improve.
shapr
+1  A: 

"the fact that the environment must be set up, which is MUCH slower (at the moment ~1000 sec)!"

Well, at least you know where to focus... Do you know where that time is being spent?

Obviously any solution is going to depend on the specifics here.

There are three solutions that I've used in this sort of situation:

  1. Use more machines. Perhaps you could partition your services across two machines? Would that cut your setup time in half?

  2. Use faster machines. In one situation I know of, a team cut their integration test execution from something like 18 hours to 1 hour by upgrading the hardware (multiple CPUs, fast RAID storage, more RAM, the works). Sure, it cost them on the order of $10k USD, but it was worth it.

  3. Use an in-memory database for integration tests. Yes, I know you'll want tests against the real database too, but perhaps you could run the tests initially against an in-memory version to get fast feedback.
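A sketch of the in-memory database idea, assuming the System.Data.SQLite ADO.NET provider is available to the test project (the connection string is standard SQLite; the schema here is illustrative):

```csharp
using System.Data.SQLite;

public static class FastTestDatabase
{
    public static SQLiteConnection Open()
    {
        // ":memory:" keeps the entire database in RAM; it vanishes
        // when the connection closes, so every test run starts clean
        // with no HDD activity and no redeployment step.
        var connection = new SQLiteConnection("Data Source=:memory:");
        connection.Open();

        using (var cmd = connection.CreateCommand())
        {
            // Recreate just enough schema for the tests under run.
            cmd.CommandText =
                "CREATE TABLE Orders (Id INTEGER PRIMARY KEY, Total REAL)";
            cmd.ExecuteNonQuery();
        }
        return connection;
    }
}
```

The fast feedback loop runs against this; a periodic (e.g. nightly) build still runs the same tests against the real database server to catch vendor-specific differences.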

Jeffrey Fredrick