views: 420
answers: 6

We know that code coverage is a poor metric for gauging the quality of test code. We also know that testing the language or framework is a waste of time.

On the other hand, what metrics can we use to identify quality tests? Are there any best practices or rules of thumb you've learned that help you identify and write higher-quality tests?

+2  A: 

Write tests that verify the base functionality and the individual use cases of the software's intent. Then write tests that check edge cases and verify expected exceptions.

In other words, write good unit tests from a customer perspective, and forget about metrics for test code. No metric will tell you whether your test code is good; only functioning software tells you that.
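That layering might look like the following minimal sketch (the divide() function and all test names are hypothetical, chosen only to illustrate base functionality, edge cases, and expected exceptions):

```python
# Hypothetical function under test.
def divide(a, b):
    if b == 0:
        raise ValueError("division by zero")
    return a / b

def test_base_functionality():
    assert divide(10, 2) == 5          # the primary, customer-visible behavior

def test_edge_case():
    assert divide(-9, 3) == -3         # a corner of the input space

def test_expected_exception():
    try:
        divide(1, 0)
        assert False, "expected ValueError"
    except ValueError:
        pass                           # the failure mode promised to callers

test_base_functionality()
test_edge_case()
test_expected_exception()
```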

Steven A. Lowe
+1  A: 

My rules of thumb:

  1. Cover even the simplest test cases in your test plan (don't risk leaving the most-used functionality untested)
  2. Trace each test case back to its corresponding requirement
  3. As Joel says, have a separate team that does testing
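Tracing tests to requirements can be as lightweight as a naming and docstring convention. A hedged sketch follows; the requirement IDs, the login() helper, and the USERS table are all invented for illustration:

```python
# Hypothetical login logic under test.
USERS = {"alice": "s3cret"}

def login(user, password):
    return USERS.get(user) == password

def test_req_101_valid_login():
    """REQ-101 (made-up ID): a registered user can log in with valid credentials."""
    assert login("alice", "s3cret") is True

def test_req_102_bad_password_rejected():
    """REQ-102 (made-up ID): a wrong password is rejected."""
    assert login("alice", "wrong") is False

test_req_101_valid_login()
test_req_102_bad_password_rejected()
```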
friol
+1  A: 

I'd disagree that code coverage isn't a useful metric. If you don't have 100% code coverage, that at least indicates areas that need more tests.

In general, though - once you get adequate statement coverage, the next logical place to go is in writing tests that are either designed to directly verify the requirements that the code was written to meet, or that are intended to stress the edge-cases. Neither of these will fall naturally out of anything you can easily measure directly.

Mark Bessey
Is a test for a property getter that only does a return foo; needed? What about hundreds of them? Do you really think that code should be covered by tests?
Sergio Acosta
Note that I didn't say one test per method. What I said was that missing coverage indicates functionality that's not being tested. If that getter method is going to be used somewhere in the system, then it ought to be used in (one or more of) the tests, as well.
Mark Bessey
@Sergio Acosta: If property tests are generated automatically, I don't see a problem with testing property getters and setters. The problem arises when you're writing tests by hand: you'll probably have better things to test than getters and setters.
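The "generate them automatically" idea can be sketched as a generic round-trip check, so trivial accessors never need hand-written tests. Everything here (the Person class, the property list) is hypothetical:

```python
# Hypothetical class with trivial properties.
class Person:
    def __init__(self):
        self.name = None
        self.age = None

def check_property_roundtrip(obj, prop, value):
    # Set the property, then read it back; a trivial accessor should
    # return exactly what was stored.
    setattr(obj, prop, value)
    assert getattr(obj, prop) == value

# One loop covers every trivial getter/setter instead of hundreds of
# hand-written tests.
for prop, value in [("name", "Ada"), ("age", 36)]:
    check_property_roundtrip(Person(), prop, value)
```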
Alfred Myers
+4  A: 

Make sure it's easy and quick to write tests. Then write lots of them.

I've found that it's very hard to predict in advance which tests will end up failing, either now or a long way down the line. I tend to take a scatter-gun approach, trying to hit corner cases when I can think of them.

Also, don't be afraid of writing bigger tests which test a bunch of things together. Of course if that test fails it might take longer to figure out what went wrong, but often problems only arise once you start gluing things together.
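A minimal sketch of such a "bigger" test, with two hypothetical components (Parser and Summarizer) invented for illustration:

```python
# Two small components that each pass their own unit tests...
class Parser:
    def parse(self, text):
        return [int(x) for x in text.split(",")]

class Summarizer:
    def total(self, numbers):
        return sum(numbers)

def test_parse_then_summarize():
    # ...but may still disagree about the data flowing between them.
    # This test exercises the glue, where problems often first appear.
    numbers = Parser().parse("1,2,3")
    assert Summarizer().total(numbers) == 6

test_parse_then_summarize()
```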

Chris Jefferson
+9  A: 
  1. Make sure your tests are independent of each other. A test shouldn't depend on the execution or results of some other test.
  2. Make sure each test has clearly defined entry criteria, test steps and exit criteria.
  3. Set up a Requirements Verification Traceability Matrix (RVTM). Each test should verify one or more requirements.
  4. Make sure your tests are identifiable. Establish a simple naming or labeling convention and stick to it. Reference the test identifier when logging defects.
  5. Treat your tests like you treat your code. Have a testware development process that mirrors your software development process. Tests should have peer reviews, be under version control, have change control procedures, etc.
  6. Categorize and organize your tests. Make it easy to find and run a test, or suite of tests, as needed.
  7. Make your tests as succinct as possible. This makes them easier to run, and automate. It's better to run lots of little tests than one large test.
  8. When a test fails, make it easy to see why the test failed.
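Points 1 and 8 can be sketched together: each test builds its own fresh fixture so execution order never matters, and each assertion carries a message explaining the failure. The account fixture here is invented for illustration:

```python
def make_account():
    # Fresh fixture per test; no test reuses another test's state.
    return {"balance": 100}

def test_deposit():
    acct = make_account()
    acct["balance"] += 50
    assert acct["balance"] == 150, (
        "deposit of 50 on 100 should give 150, got %s" % acct["balance"])

def test_withdraw():
    acct = make_account()   # still starts at 100, whatever ran before
    acct["balance"] -= 30
    assert acct["balance"] == 70, (
        "withdrawal of 30 from 100 should give 70, got %s" % acct["balance"])

# The tests pass in either order because neither depends on the other.
test_withdraw()
test_deposit()
```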
Patrick Cuff
that pretty much sums it up
Omar Kooheji
+2  A: 

I think use cases prove very useful for getting the best test coverage. If your functionality is expressed as use cases, it can easily be converted into different test scenarios covering positive, negative, and exception paths. The use case also states the prerequisites and any data preparation needed, which proves very handy when writing test cases.
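As a hedged sketch, a hypothetical "withdraw cash" use case converts into exactly those three scenarios, with its precondition (an account holding funds) serving as the test's data preparation:

```python
# Hypothetical system under test for the "withdraw cash" use case.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        if amount > self.balance:
            raise RuntimeError("insufficient funds")
        self.balance -= amount
        return self.balance

def test_positive_scenario():
    assert Account(100).withdraw(40) == 60     # happy path

def test_negative_scenario():
    try:
        Account(100).withdraw(-5)
        assert False, "expected ValueError"
    except ValueError:
        pass                                   # invalid input rejected

def test_exception_scenario():
    try:
        Account(10).withdraw(50)
        assert False, "expected RuntimeError"
    except RuntimeError:
        pass                                   # documented failure mode

test_positive_scenario()
test_negative_scenario()
test_exception_scenario()
```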

Chanakya