I have used TDD as a development style on some projects over the past two years, but I always get stuck on the same point: how can I test the integration of the various parts of my program?

What I am currently doing is writing a test case per class (this is my rule of thumb: a "unit" is a class, and each class has one or more test cases). I try to resolve dependencies by using mocks and stubs, and this works really well, as each class can be tested independently. After some coding, all important classes are tested. I then "wire" them together using an IoC container. And this is where I get stuck: how do I test whether the wiring was successful and the objects interact the way I want?

An example: Think of a web application. There is a controller class which takes an array of ids, uses a repository to fetch the records based on these ids and then iterates over the records and writes them as a string to an outfile.

To make it simple, there would be three classes: Controller, Repository, OutfileWriter. Each of them is tested in isolation.
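
Just to make that concrete, this is roughly what one of those isolated tests looks like for me (a sketch with NUnit and Moq; the interfaces and method names such as IRepository, FindByIds and Export are only invented for the example):

using Moq;
using NUnit.Framework;

[TestFixture]
public class ControllerTests
{
    [Test]
    public void Export_WritesEachFetchedRecordToTheOutfile()
    {
        var ids = new[] { 1, 2 };

        // the controller only ever sees mocks, never the real dependencies
        var repository = new Mock<IRepository>();
        repository.Setup(r => r.FindByIds(ids)).Returns(new[] { "record 1", "record 2" });
        var writer = new Mock<IOutfileWriter>();

        var controller = new Controller(repository.Object, writer.Object);
        controller.Export(ids);

        writer.Verify(w => w.Write("record 1"));
        writer.Verify(w => w.Write("record 2"));
    }
}

Tests like this tell me that each class behaves correctly on its own, but nothing about the composed application.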

What I would do in order to test the "real" application: make the HTTP request (either manually or automated) with some ids from the database and then look in the filesystem to see whether the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic? Is this what is called an "integration test"? In a book on unit testing that I recently read, it seemed to me that integration testing was presented more as an anti-pattern.

+3  A: 

What you describe is indeed integration testing (more or less). And no, it is not an anti-pattern, but a necessary part of the software development life cycle.

Any reasonably complicated program is more than the sum of its parts. So however well you unit test it, you still won't have much of a clue about whether the whole system is going to work as expected.

There are several reasons why this is so:

  • unit tests are performed in an isolated environment, so they can't say anything about how the parts of the program are working together in real life
  • the "unit tester hat" easily limits one's view, so there are whole classes of factors which the developers simply don't recognize as something that needs to be tested*
  • even if they do, there are things which can't be reasonably tested in unit tests - e.g. how do you test whether your app server survives under high load, or if the DB connection goes down in the middle of a request?

* One example I just read in Luke Hohmann's book Beyond Software Architecture: in an app which implemented strong anti-piracy protection by creating and maintaining a "snapshot" of the IDs of the hardware components in the actual machine, the developers had the code very well covered with unit tests. Then QA managed to crash the app within 10 minutes by trying it out on a machine without a network card. As it turned out, since the developers were working on Macs, they had taken it for granted that the machine would have a network card whose MAC address could be incorporated into the snapshot...

Péter Török
"Any reasonably complicated program is more than the sum of its parts." - that is a really important observation I think!
Max
I always thought Mars Climate Orbiter served as a better example [ http://www.viswiki.com/en/Mars_Climate_Orbiter ]
johnny g
@johnny, that is indeed an excellent example, which did not come to my mind. Although from what I've read, the problem was a deeper process/communication issue rather than simply lack of testing - especially taking into account that you can't field test space missions :-)
Péter Török
@Max, it is, and - just to be clear about it - it is not mine, although I can't recall the source right now.
Péter Török
A: 

What I would do in order to test the "real" application: make the HTTP request (either manually or automated) with some ids from the database and then look in the filesystem to see whether the file was written. Of course this process could be automated, but still: doesn't that duplicate the test logic?

Maybe you are duplicating code, but you are not duplicating effort. Unit tests and integration tests serve two different purposes, and usually both purposes are desired in the SDLC. If possible, factor out code used by both the unit and integration tests into a common library. I would also try to have separate projects for your unit and integration tests, because your unit tests should be run separately (fast and with no dependencies). Your integration tests will be more brittle and break more often, so you will probably have a different policy for running and maintaining them.
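
For example (a sketch assuming NUnit; MSTest has a comparable TestCategory mechanism, and the test names here are only illustrative), you can tag the slow tests with a category so the fast unit-test run leaves them out and the build server runs them on its own schedule:

using NUnit.Framework;

// fast, dependency-free unit test: run on every build
[TestFixture]
public class ControllerTests
{
    [Test]
    public void Export_WritesEachRecord() { /* mocks and stubs only */ }
}

// slow test that touches the database and the file system:
// tagged so the runner can include or exclude it by category
[TestFixture, Category("Integration")]
public class ControllerRepositoryTests
{
    [Test]
    public void Export_WritesRecordsFetchedFromTheTestDatabase() { /* real wiring */ }
}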

Is this what is called an "integration test"?

Yes indeed it is.

RyBolt
+2  A: 

IMO (and I have no literature to back me up on this), the key difference between the various forms of testing is scope:

  • Unit testing is testing isolated pieces of functionality [typically a method or stateful class]
  • Integration testing is testing the interaction of two or more dependent pieces [typically a service and consumer, or even a database connection, or connection to some other remote service]
  • System integration testing is testing of a system end to end [a special case of integration testing]

If you are familiar with unit testing, then it should come as no surprise that there is no such thing as a perfect or 'magic-bullet' test. Integration and system integration testing is very much like unit testing, in that each is a suite of tests set to verify a certain kind of behavior.

For each test, you set the scope, which then dictates the input and expected output. You then execute the test and compare the actual result to the expected one.
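
For instance, an integration test for the controller/repository/writer example from the question might scope itself to the three real classes and a known test database (a sketch assuming NUnit; the constructor signatures and the record contents are just assumptions for the example):

using System.IO;
using NUnit.Framework;

[TestFixture, Category("Integration")]
public class ExportIntegrationTests
{
    [Test]
    public void Export_WritesKnownRecordsToTheOutfile()
    {
        // scope: Controller + real Repository + real OutfileWriter
        // input: ids known to exist in the test database
        // expected output: a file containing exactly those records
        var outPath = Path.Combine(Path.GetTempPath(), "export-test.txt");
        var controller = new Controller(
            new Repository("connection string of the test database"),
            new OutfileWriter(outPath));

        controller.Export(new[] { 1, 2 });

        StringAssert.Contains("record 1", File.ReadAllText(outPath));
        StringAssert.Contains("record 2", File.ReadAllText(outPath));
    }
}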

In practice, you may have a good idea how the system works, and so writing typical positive and negative path tests will come naturally. However, for any application of sufficient complexity, it is unreasonable to expect total coverage of every possible scenario.

Unfortunately, this means unexpected scenarios will crop up in Quality Assurance [QA], PreProduction [PP], and Production [Prod] cycles. When that happens, your attempts to replicate these scenarios in dev should make their way into your integration and system integration suites as automated tests.

Hope this helps, :)


ps: pet-peeve #1: managers or devs calling integration and system integration tests "unit tests" simply because nUnit or MsTest was used to automate them ...

johnny g
For what it's worth, Freeman and Pryce prefer that "Integration Tests" answer the question "Does our code work against code we can't change?" Not sure I like their terminology choice, but it's a separate idea from "do our parts hook together" that ought to have a name.
VoiceOfUnreason
A: 

In an integration test, just as in a unit test, you need to validate what happened. In your example you specified an OutfileWriter, so you would need some mechanism to verify that the file and its data are good. You really want to automate this, so you might want to have something like this (sketched here in C#):

using System.IO;
using System.Linq;

class OutFileValidator {
    // open the file, read the written data back and apply the validation logic
    public bool IsCorrect(string fileName, string[] expectedLines) {
        return File.ReadAllLines(fileName).SequenceEqual(expectedLines);
    }
}
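
which a test could then call at the end of its run, along these lines (NUnit's Assert shown; the path and the expected lines are of course just placeholders for whatever your own set-up uses):

var validator = new OutFileValidator();
Assert.IsTrue(validator.IsCorrect(@"C:\temp\out.txt", new[] { "record 1", "record 2" }));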
Gutzofter
A: 

You might review "Taming the Beast", a presentation by Markus Clermont and John Thomas about automated testing of AJAX applications (there is a video of it on YouTube).

Very rough summary of a relevant piece: you want to use the smallest testing technique you can for any specific verification. To put the same idea another way, you are trying to minimize the time required to run all of the tests without sacrificing any information.

The larger tests, therefore, are mostly about making sure that the plumbing is right: is tab A actually in slot A rather than slot B; do both components agree that length is measured in meters rather than feet; and so on.

There's going to be duplication in which code paths are executed, and possibly you will reuse some of the setup and verification code, but I wouldn't normally expect your integration tests to include the same level of combinatorial explosion that happens at the unit level.
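
As one concrete example of a cheap plumbing check: a test that builds the same container the application uses and simply resolves the top-level object will already catch most wiring mistakes (a sketch assuming Castle Windsor; "ApplicationInstaller" stands in for wherever your production registrations live, and any other container has an equivalent):

using Castle.Windsor;
using NUnit.Framework;

[TestFixture, Category("Integration")]
public class ContainerWiringTests
{
    [Test]
    public void Container_CanResolveTheController()
    {
        // use exactly the registrations the real application uses
        var container = new WindsorContainer();
        container.Install(new ApplicationInstaller());

        Assert.IsNotNull(container.Resolve<Controller>());
    }
}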

VoiceOfUnreason
A: 

Driving your TDD with BDD would cover most of this for you. You can use Cucumber / SpecFlow, with WatiR / WatiN. Each feature has one or more scenarios, and you work on one scenario (behaviour) at a time; when it passes, you move on to the next scenario until the feature is complete.

To complete a scenario, you have to use TDD to drive out the code necessary to make each step in the current scenario pass. The scenarios are agnostic to your back-end implementation; however, they verify that your implementation works. If something in the web app isn't working for that feature, the behaviour needs to be captured in a scenario.

You can of course use integration testing, as others pointed out.
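
For the example in the question, a scenario and its step bindings might look roughly like this (a SpecFlow-style sketch; the wording, the regexes and the hard-coded path are all just placeholders):

// Feature: Export records
//   Scenario: Export two existing records to a file
//     Given records 1 and 2 exist in the database
//     When I request an export for ids 1 and 2
//     Then the outfile contains records 1 and 2

using System.IO;
using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class ExportSteps
{
    [Given(@"records (\d+) and (\d+) exist in the database")]
    public void GivenRecordsExist(int first, int second)
    {
        // seed the test database (details depend on your persistence layer)
    }

    [When(@"I request an export for ids (\d+) and (\d+)")]
    public void WhenIRequestAnExport(int first, int second)
    {
        // drive the real application here, e.g. via an HTTP request
        // or through WatiN if the feature needs a browser
    }

    [Then(@"the outfile contains records (\d+) and (\d+)")]
    public void ThenTheOutfileContainsTheRecords(int first, int second)
    {
        var content = File.ReadAllText(@"C:\temp\out.txt");
        StringAssert.Contains("record " + first, content);
        StringAssert.Contains("record " + second, content);
    }
}

Each step is driven out with TDD underneath, and the scenario as a whole plays the role of the outer integration test.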

Sean B