In designing unit tests, from what I've read you should try to stick to these principles:

  • Isolate your tests from each other
  • Test only a single behavior at a time
  • Make sure the test is repeatable

On the other hand, these features do not always seem to correlate with a good test:

  • High code coverage
  • High performance

Is this a fair approach?

+1  A: 

Performance is generally not a concern with unit tests. High code coverage is. You write individual tests to narrowly test a single function. You write enough (narrow) unit tests to cover most of your functions.

Steve B.
If performance *is* a concern for you, consider mocking out the portions of the process that are not currently being tested (e.g., database access or remote server calls). This has the added benefit of reducing the scope of the test and improving its performance.
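
A minimal sketch of what that can look like: the names here (OrderRepository, OrderService) are hypothetical, and a hand-rolled in-memory fake stands in for the real database so the test stays narrow and fast.

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    // Hypothetical names, for illustration only.
    interface OrderRepository {
        int countOrdersFor(String customerId); // the production implementation would hit the database
    }

    class OrderService {
        private final OrderRepository repository;
        OrderService(OrderRepository repository) { this.repository = repository; }
        boolean qualifiesForDiscount(String customerId) {
            return repository.countOrdersFor(customerId) >= 3;
        }
    }

    public class OrderServiceTest {
        // An in-memory fake replaces the real repository, so the test never
        // touches a database and only exercises the discount logic.
        private final OrderRepository fakeRepository = customerId -> 3;

        @Test
        public void frequentCustomerGetsDiscount() {
            OrderService service = new OrderService(fakeRepository);
            assertTrue(service.qualifiesForDiscount("alice"));
        }
    }
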
Chris Nava
+1  A: 

I would say that individual unit tests more than likely are not going to cover a big piece of code, as that defeats the purpose, so yes, I would agree with your first point.

Now, as for high performance, it isn't a necessity as such; however, when developing a system with hundreds of tests, you want them to run as efficiently as possible so that you can execute the whole suite quickly.

Mitchel Sellers
A: 

Upon rereading the question:

On the first point, a suite of unit tests should have high code coverage but not necessarily focus on performance. A single unit test should be rather small in terms of how much code it covers. If you want a biological metaphor: a single cell doesn't cover your body, but a group of cells forms the skin that covers most of it.

JB King
A: 

Yes, I think it's fair to say that. You can always get high code coverage by creating many different tests.

Magnus Skog
+1  A: 

High code coverage is more an indication of how widely you've tested your code base, i.e., whether you've exercised all the possible code paths in your system. It gives no indication of the quality of individual tests, but it is one metric for measuring the quality of your unit tests as a whole.

As for high performance, you need to categorize your tests, separating out the tests that touch your database or other high-latency services. Keep your performance tests in a separate category as well. Also make sure you keep an eye out for integration (or end-to-end) tests, for instance a test that opens up a web browser, posts a form, and then verifies the response. If you do this, you don't really need to worry about the performance of your tests.
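
One way to do that categorization, sketched here assuming JUnit 4 (the category names are made up), is to tag the slow and end-to-end tests with marker interfaces so the build can include or exclude them:

    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    // Empty marker interfaces used only as category labels.
    interface SlowTests {}
    interface IntegrationTests {}

    public class CustomerLookupTest {

        @Test
        public void parsesCustomerName() {
            // fast, isolated unit test: runs on every build
        }

        @Test
        @Category(SlowTests.class)
        public void loadsCustomerFromDatabase() {
            // touches the database: excluded from the quick test run
        }

        @Test
        @Category(IntegrationTests.class)
        public void postsFormAndVerifiesResponse() {
            // drives a browser end to end: run separately, e.g. nightly
        }
    }

The quick developer build then runs only the uncategorized tests (e.g. via the Categories runner or the build tool's group filter), while the slow and integration categories run on the build server.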

Praveen Angyan
A: 

Both performance and coverage are goals of the entire test suite. Each individual test should not try to "grab" as much coverage as possible, nor should it be concerned with performance.

We do want the whole test suite to run in a reasonable time and to cover most of the code's functionality, but not at the price of writing bad unit tests.

Dror Helper
A: 

High code coverage: An individual unit test does not have to cover 100% of the method or methods under test. Eventually, the suite of unit tests needs to cover as much of the method or methods under test as you feel is important. It could be 1%, it could be 100%, and it is probably different for different sections of code.

I generally try for 70%-80% coverage, and then pay more attention to the trend of coverage than to the actual value.

High performance: More important than the speed of an individual unit test is the time it takes to run all the tests in the suite. If it takes too long, developers may decide not to run them as often. But you often can't afford to spend a lot of time optimizing, especially when it comes to unit tests; you just need the entire suite to run fast enough.

It isn't uncommon for a system I'm on to have hundreds of unit tests, and still run in 2 minutes. One really slow test, or a bunch of slightly slow tests, can really slow down the build-test-refactor cycle.

CoverosGene
+1  A: 

Coverage is not a (meaningful) property of individual tests. As you say, one test should cover only a single behavior. Performance, although it is (generally) only significant in the aggregate, is very important, and here's why.

We run tests for lots of reasons. The fewer barriers there are to writing and running them, the more tests we'll have - and the better our code coverage will be.

If we don't run tests because it's too much trouble, we don't run tests; we don't discover our bugs; our productivity suffers. So they should be automated (zero trouble).

If we don't run tests because they take time away from coding - or because the time they take distracts us from the problem and jars our concentration - we don't run tests; we don't discover our bugs; our productivity suffers. So they should be fast enough that running them is not a distraction.

If our tests hit the file system or the database much, that will slow them down and we'll run them less. So avoid that; abstract away the slow bits.

Test early and often. Test lots. If they're slow, you won't. So keep them fast.

Carl Manaster
+1  A: 

Each unit test should concentrate on testing a single behaviour, so the code coverage of one unit test should ideally be very small. Then when you have hundreds and thousands of such very focused tests, their total coverage should be high.

Performance both is not and is important.

Performance is not important in the sense that micro-optimizations should not be done. You should first of all concentrate on the readability of the tests. For example, the book Clean Code has an example of tests that verify the state of a temperature alarm. Originally each test had some five asserts checking booleans such as assertTrue(hw.heaterState()), but the asserts were then refactored into one string comparison, assertEquals("HBchL", hw.getState()), where uppercase means enabled and lowercase means disabled. The latter code performs worse because it creates and compares additional strings, but it is much more readable, so it is better test code.
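
A rough, self-contained sketch of that shape (the class and method names below are stand-ins, not the book's actual code):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Minimal stand-in for the book's hardware stub.
    class FakeEnvironmentHardware {
        boolean heater, blower, cooler, hiTempAlarm, loTempAlarm;

        // Encodes the five flags in one string: uppercase = enabled, lowercase = disabled
        // (heater, blower, cooler, hi-temp alarm, lo-temp alarm).
        String getState() {
            return (heater ? "H" : "h")
                 + (blower ? "B" : "b")
                 + (cooler ? "C" : "c")
                 + (hiTempAlarm ? "H" : "h")
                 + (loTempAlarm ? "L" : "l");
        }
    }

    public class TemperatureAlarmTest {
        @Test
        public void wayTooColdTurnsOnHeaterBlowerAndLoTempAlarm() {
            FakeEnvironmentHardware hw = new FakeEnvironmentHardware();
            hw.heater = true;
            hw.blower = true;
            hw.loTempAlarm = true;
            // One readable assert instead of five separate boolean asserts.
            assertEquals("HBchL", hw.getState());
        }
    }
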

Performance is important in the sense that running all unit tests should be fast, hundreds or thousands of tests per second (I prefer less than 1 ms per test on average). You should be able to run all your unit tests in a matter of seconds. If the tests take so long to run that you hesitate to run them after making a small change, and instead run them only when going for more coffee, then they take too long. If the test suite is slow, you should break the dependencies on other components and mock them, so that the system under test is as small as possible. In particular, the unit tests should not use a database, because that would make them hopelessly slow.
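
For example, with a mocking library such as Mockito the slow dependency can be replaced with a stub (the UserRepository and GreetingService names below are hypothetical):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    // Hypothetical names; the point is that no real database is involved.
    interface UserRepository {
        String findName(long id);
    }

    class GreetingService {
        private final UserRepository users;
        GreetingService(UserRepository users) { this.users = users; }
        String greet(long id) { return "Hello, " + users.findName(id) + "!"; }
    }

    public class GreetingServiceTest {
        @Test
        public void greetsUserByName() {
            // The mock returns a canned value instantly, so the test stays well under 1 ms.
            UserRepository users = mock(UserRepository.class);
            when(users.findName(42L)).thenReturn("Alice");

            assertEquals("Hello, Alice!", new GreetingService(users).greet(42L));
        }
    }
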

In addition to unit tests, there is also a need for integration/acceptance tests that test the system as a whole. They have a different role in development than unit tests, so it is acceptable for the acceptance test suite to be slow (no pun intended). They should be run by the continuous integration server at least once a day.

Esko Luontola