views: 463

answers: 5

Hi,

I'd like to know something. I know that to make your tests easier you should use mocks during unit testing, so that you only test the component you want, without external dependencies.

But at some point you have to bite the bullet and test the classes which interact with your database/files/network...

So my question is: what do you do to test these classes? I don't feel that installing a database on my CI server is good practice, but do you have other options?

Should I create another server with another CI tool, with all the external dependencies?

Should I run integration tests on my CI server as often as my unit tests?

Maybe a full-time person should be in charge of testing these components manually? (Or in charge of creating the test environment and configuring the interaction between your classes and your external dependencies, like editing your application's config files.)

I'd like to know: how do you do it in the real world?

+4  A: 

Depending on the actual nature of the integration tests, I'd recommend using an embedded database engine which is recreated at least once before each run. This enables tests of different commits to run in parallel and provides a well-defined starting point for the tests.
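
For example, a minimal sketch of that idea in Java with JUnit 4 and the H2 in-memory engine (the class, table, and JDBC URL names below are made up for illustration; any embedded engine such as HSQLDB or Derby would work the same way):

    import static org.junit.Assert.assertEquals;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    public class CustomerRepositoryIT {

        private Connection connection;

        @Before
        public void createFreshDatabase() throws Exception {
            // H2 in-memory database: a brand new, empty instance per test,
            // so every run starts from the same well-defined state.
            connection = DriverManager.getConnection("jdbc:h2:mem:testdb");
            try (Statement stmt = connection.createStatement()) {
                stmt.execute("CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(100))");
            }
        }

        @Test
        public void insertsAndReadsBackACustomer() throws Exception {
            try (Statement stmt = connection.createStatement()) {
                stmt.execute("INSERT INTO customer VALUES (1, 'Alice')");
                try (ResultSet rs = stmt.executeQuery("SELECT name FROM customer WHERE id = 1")) {
                    rs.next();
                    assertEquals("Alice", rs.getString("name"));
                }
            }
        }

        @After
        public void dropDatabase() throws Exception {
            // With H2's default settings, closing the last connection
            // discards the in-memory database entirely.
            connection.close();
        }
    }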

Network services - by definition - can also be installed somewhere else.

Always be very careful, though, to keep your CI machine separated from any dev or prod environments.

David Schmitt
+3  A: 

The approach I've seen taken most often is to run unit tests immediately on checkin, and to run more lengthy integration tests at fixed intervals (possibly on a different server; that's really up to your preference). I've also seen integration tests split into "short-running" integration tests and "long-running" integration tests, which are run at different intervals (the "short-running" tests run every hour, for example, and the "long-running" tests run overnight).

The real goal of any automated testing is to get feedback to developers as quickly as is feasible. With that in mind, you should run integration tests as often as you possibly can. If there's a wide variance in the run length of your integration tests, you should run the quicker integration tests more often and the slower integration tests less often. How often you run any set of tests is going to depend on how long it takes all the tests to run, and how disruptive the test runs will be to shorter-running tests (including unit tests).
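
If you are on Java and JUnit, one possible way to make that "short-running" vs. "long-running" split mechanical is JUnit 4's categories: tag each test with a marker interface and point the frequent and nightly CI jobs at different suites. The class and test names below are hypothetical, just to show the shape of it:

    import org.junit.Test;
    import org.junit.experimental.categories.Categories;
    import org.junit.experimental.categories.Categories.IncludeCategory;
    import org.junit.experimental.categories.Category;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite.SuiteClasses;

    // Suite the frequent CI job runs; a second suite using
    // @IncludeCategory(LongIntegrationTest.class) would back the nightly job.
    @RunWith(Categories.class)
    @IncludeCategory(ShortIntegrationSuite.ShortIntegrationTest.class)
    @SuiteClasses(ShortIntegrationSuite.OrderServiceIT.class)
    public class ShortIntegrationSuite {

        // Marker interfaces used as JUnit categories.
        public interface ShortIntegrationTest {}
        public interface LongIntegrationTest {}

        // Hypothetical integration test class for illustration.
        public static class OrderServiceIT {

            @Category(ShortIntegrationTest.class)
            @Test
            public void quickDatabaseRoundTrip() {
                // talks to the real database, finishes in seconds
            }

            @Category(LongIntegrationTest.class)
            @Test
            public void fullOvernightImport() {
                // long-running end-to-end scenario
            }
        }
    }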

I realize this doesn't answer your entire question, but I hope it gives you some ideas about the scheduling part.

MattK
+1  A: 

I do not know what kind of platform you're on, but I use Java. Where I work, we create integration tests in JUnit and inject the proper dependencies using a DI container like Spring. They are run against a real data source, both by the developers themselves (normally a small subset) and by the CI server.
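
A minimal sketch of such a Spring-backed JUnit 4 integration test might look like the following; the Spring test annotations and runner are real API, but the context file name and the test class itself are hypothetical:

    import static org.junit.Assert.assertNotNull;

    import javax.sql.DataSource;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

    // Spring wires the real DataSource (and any other collaborators)
    // defined in the test context, instead of mocks.
    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration("classpath:integration-test-context.xml")
    public class CustomerDaoIntegrationTest {

        @Autowired
        private DataSource dataSource;

        @Test
        public void dataSourceIsWiredAndReachable() throws Exception {
            assertNotNull(dataSource);
            // A trivial check that the configured database actually answers.
            try (java.sql.Connection c = dataSource.getConnection()) {
                assertNotNull(c.getMetaData().getDatabaseProductName());
            }
        }
    }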

How often you run the integration tests depends on how long they take to run, in my opinion. Run them as often as you can. Leave the real person out of this, and let him or her run manual system tests in areas that are difficult or too expensive to automate (for instance: spelling, the position of different GUI components). Leave the editing of config files to a machine. Where I work, we have system variables (DEV, TEST, and so on) set on the computers and let the app choose a config file based on that.
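
A sketch of that last idea, assuming a hypothetical APP_ENV variable and a config-<env>.properties naming convention (both are made up here, not part of the answer):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public final class ConfigLoader {

        private ConfigLoader() {}

        /**
         * Picks the config file based on the APP_ENV system variable
         * (e.g. DEV, TEST, PROD), so nobody has to edit files per machine.
         */
        public static Properties load() throws IOException {
            String env = System.getenv("APP_ENV");
            if (env == null) {
                env = "DEV"; // sensible default for developer machines
            }
            String fileName = "config-" + env.toLowerCase() + ".properties";
            Properties props = new Properties();
            try (InputStream in = new FileInputStream(fileName)) {
                props.load(in);
            }
            return props;
        }
    }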

+5  A: 

I'd like to know: how do you do it in the real world?

In the real world there isn't a simple prescription for what to do, but there is one guiding truth: you want to catch mistakes/bugs/test failures as soon as possible after they are introduced. Let that be your guide; everything else is technique.

A couple of common techniques:

  • Tests running in parallel. This is my preference; I like to have two systems, each running its own instance of CruiseControl (which I'm a committer for): one runs the unit tests with fast feedback (< 5 minutes) while the other runs the integration tests constantly. I like this because it minimizes the delay between when a checkin happens and when a system test might catch it. The downside that some people don't like is that you can end up with multiple test failures for the same checkin, both a unit test failure and an integration test failure. I don't find this a major downside in practice.

  • A life-cycle model where system/integration tests run only after the unit tests have passed. There are tools like AnthillPro that are built around this kind of model, and the approach is very popular. In that model they take the artifacts that have passed the unit tests, deploy them to a separate staging server, and then run the system/integration tests there.

If you have more questions about this topic, I'd recommend the Continuous Integration and Testing Conference (CITCON) and/or the CITCON mailing list.

Jeffrey Fredrick
Wonderful, CITCON has lots of resources!
Nicolas Dorier
A: 

Jitr is a JUnit Integration Test Runner that allows your web application integration tests to run easily against a lightweight web container in the same JVM as your tests.

See their site for details: http://www.jitr.org/

uthark