Each unit test should concentrate on testing a single behaviour, so the code coverage of any one unit test should ideally be very small. When you have hundreds or thousands of such narrowly focused tests, their combined coverage will be high.
Performance both is and is not important.
Performance is not important in the sense that you should not micro-optimize test code; concentrate first and foremost on the readability of the tests. For example, the book Clean Code shows tests that verified the state of a temperature alarm. Originally each test made some five boolean assertions such as assertTrue(hw.heaterState()), but they were refactored into a single string comparison, assertEquals("HBchL", hw.getState()), where an uppercase letter means that a component is enabled and a lowercase letter means that it is disabled. The latter version performs slightly worse because it creates and compares extra strings, but it is much more readable, which makes it better test code.
Performance is important in the sense that running all the unit tests should be fast: hundreds or thousands of tests per second (I prefer less than 1 ms per test on average). You should be able to run all your unit tests in a matter of seconds. If the tests take so long that you hesitate to run them after every small change, and instead run them only when you go to get more coffee, then they take too long. If the test suite is slow, break the dependencies on other components and mock them, so that the system under test is as small as possible. In particular, unit tests should not use a database, because that would make them hopelessly slow.
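One common way to break such a dependency is to depend on an interface and substitute an in-memory fake in the tests. The names below (`UserRepository`, `GreetingService`, and so on) are hypothetical, a sketch of the technique rather than any particular library's API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// In production this interface would be backed by a database;
// the unit test substitutes a fast in-memory fake instead.
interface UserRepository {
    Optional<String> findEmail(int userId);
}

class InMemoryUserRepository implements UserRepository {
    private final Map<Integer, String> emails = new HashMap<>();

    void add(int id, String email) {
        emails.put(id, email);
    }

    @Override
    public Optional<String> findEmail(int userId) {
        return Optional.ofNullable(emails.get(userId));
    }
}

// System under test: small, and coupled only to the interface,
// so its tests never touch a real database.
class GreetingService {
    private final UserRepository repo;

    GreetingService(UserRepository repo) {
        this.repo = repo;
    }

    String greeting(int userId) {
        return repo.findEmail(userId)
                   .map(email -> "Hello, " + email)
                   .orElse("Hello, stranger");
    }
}
```

A test then constructs `GreetingService` with an `InMemoryUserRepository` preloaded with exactly the rows it needs; each such test runs in microseconds rather than the milliseconds-to-seconds a database round trip would cost.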
In addition to unit tests, there is also a need for integration/acceptance tests that exercise the system as a whole. They play a different role in development than unit tests, so it is acceptable for the acceptance test suite to be slow (no pun intended). The continuous integration server should run them at least once a day.