To the question Am I unit testing or integration testing? I answered, a bit provocatively: do your tests and let other people spend their time on taxonomy.

For me, the distinction between the various levels of testing is technically pointless: often the same tools are used, the same skills are needed, and the same objective is pursued: removing software faults. At the same time, I can understand that traditional workflows, which most developers use, need this distinction. I just don't feel at ease with traditional workflows.

I thought my answer would be either strongly downvoted or strongly upvoted. In fact, both occurred, with five upvotes and four downvotes. Even the first comment was somewhat hesitant (thank you for your upvote, by the way!).

So my question aims at better understanding what appears to me to be a controversy, and at gathering various points of view on whether or not this separation between levels of testing is relevant.

Is my opinion wrong? Do other workflows exist that don't emphasize this separation (agile methods, maybe)? What is your experience on the subject?

To be clear: I am perfectly aware of the definitions (for those who aren't, see this question). I don't think I need a lesson about software testing, but feel free to provide some background if your answer requires it.

+2  A: 

Performance is typically the reason I segregate "unit" tests from "functional" tests.

Groups of unit tests ought to execute as fast as possible, so that they can be run after every compilation.

Groups of functional tests might take a few minutes to execute and are run prior to check-in, perhaps every day or every other day depending on the feature being implemented.

If all of the tests were grouped together, I'd never run any tests until just before check-in, which would slow down my overall pace of development.

Alex B
If I understand correctly, you build your test groups according to how fast they are and how often you run them. I think this is a good point, thank you.
mouviciel
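
As a sketch of this speed-based grouping, assuming pytest as the test runner (the "functional" marker name is an arbitrary choice, not something from the answer above):

    # test_example.py -- grouping tests by speed with pytest markers.
    # Register the custom marker in pytest.ini:
    #   [pytest]
    #   markers =
    #       functional: slower tests, run before check-in
    import time
    import pytest

    def add(a, b):
        return a + b

    def test_add_unit():
        # Fast unit test: milliseconds, run after every compilation.
        assert add(2, 3) == 5

    @pytest.mark.functional
    def test_add_functional():
        # Slow functional test: run before check-in.
        time.sleep(2)  # stands in for real end-to-end work
        assert add(2, 3) == 5

The fast inner loop is then "pytest -m 'not functional'", and the full suite runs as plain "pytest" before check-in.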
A: 

Definitions from my world:

Unit test - test the obvious paths of the code and check that it delivers the expected results.

Function test - thoroughly examine the definitions of the software and test every path defined, through all allowable ranges. A good time to write regression tests.

System test - test the software in its system environment, relative to itself. Spawn all the processes you can, explore every internal combination, run it a million times overnight, and see what falls out.

Integration test - run it on a typical system setup and see whether other software conflicts with the tested one.

tkotitan
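
A minimal sketch of the unit/function distinction described above, using a made-up clamp function (the function and test names are illustrative, not from the answer; runnable with pytest):

    def clamp(x, lo, hi):
        # Hypothetical code under test: constrain x to [lo, hi].
        return max(lo, min(hi, x))

    def test_clamp_obvious_path():
        # Unit test: the obvious path, expected result.
        assert clamp(5, 0, 10) == 5

    def test_clamp_defined_ranges():
        # Function test: every defined path through the allowable
        # ranges, including boundaries -- a natural home for
        # regression tests.
        assert clamp(-1, 0, 10) == 0    # below lower bound
        assert clamp(11, 0, 10) == 10   # above upper bound
        assert clamp(0, 0, 10) == 0     # at lower bound
        assert clamp(10, 0, 10) == 10   # at upper bound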
+1  A: 

I'd have to agree with @Alex B: you need to differentiate between unit tests and integration tests when writing your tests, so that your unit tests run as fast as possible and have no more dependencies than are required to test the code under test. You want unit tests to be run very frequently, and the more "integration"-like they are, the less often they will be run.

To make this easier, unit tests usually involve (or ought to involve) mocking or faking external dependencies. Integration tests intentionally leave these dependencies in, because exercising them is the point of the integration test. Do you need to mock or fake every external dependency? Not necessarily: if the cost of mocking or faking is high and the value returned is low, that is, if using the real dependency does not add significantly to the time or complexity of the tests, leave it in.

Overall, though, I'd say it's best to be pragmatic rather than dogmatic about it: recognize the differences, and avoid intermixing if your integration tests make it too expensive to run your tests frequently.

tvanfosson
So a pragmatic separation would be between frequent, fast tests and infrequent, slow or complex tests? I've already encountered unit tests which needed heavy hardware to be run (for instance, a switch-off sequence observed with a logic analyzer plugged into the CPU bus).
mouviciel
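
To make the mocking point in this answer concrete, here is a sketch using unittest.mock from Python's standard library (WeatherClient and describe are hypothetical names, not from the answer):

    from unittest import mock

    class WeatherClient:
        # Stand-in for an external dependency (e.g. a network service).
        def fetch_temperature(self, city):
            raise NotImplementedError("real network call")

    def describe(client, city):
        # Code under test: depends on the external client.
        return "hot" if client.fetch_temperature(city) > 25 else "mild"

    def test_describe_unit():
        # Unit test: the external dependency is mocked away, so the
        # test is fast and has no dependencies beyond the code under test.
        client = mock.Mock(spec=WeatherClient)
        client.fetch_temperature.return_value = 30
        assert describe(client, "Paris") == "hot"
        client.fetch_temperature.assert_called_once_with("Paris")

An integration test would instead pass a real WeatherClient, deliberately keeping the dependency in, at the cost of speed and environment setup.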