views: 210
answers: 7

Suppose you're working on a project and the time/money budget does not allow 100% coverage of all code/paths.

It then follows that some critical subset of your code needs to be tested. Clearly a 'gut-check' approach can be used, where intuition and manual analysis produce test coverage that will be 'ok'.

However, I'm presuming that there are best practices/approaches/processes that identify critical elements up to some threshold and let you focus your testing effort on those blocks.

For example, one popular process for identifying failures in manufacturing is Failure Mode and Effects Analysis. I'm looking for a process(es) to identify critical testing blocks in software.

+3  A: 

Unless you're doing greenfield development using TDD, you are unlikely to get (or want) 100% test coverage. Code coverage is more of a guideline, something to ask "what haven't I tested?"

You may want to look at other metrics, such as cyclomatic complexity. Find the complex areas of your code and test those (then refactor to simplify).
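
For illustration, here's a minimal, contrived C# sketch (the function and numbers are hypothetical): each extra decision point raises the cyclomatic complexity, and with it the number of paths you ought to think about testing.

// Three independent if-statements give a cyclomatic complexity of 4
// (number of decisions + 1), i.e. four linearly independent paths.
// Basis-path testing would call for four test cases, while covering every
// distinct path through the function would take 2^3 = 8.
decimal CalculateShipping(decimal weight, bool express, bool international)
{
    decimal cost = 5m;

    if (weight > 10m)      // decision 1
        cost += 2m;

    if (express)           // decision 2
        cost *= 2m;

    if (international)     // decision 3
        cost += 15m;

    return cost;
}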

TrueWill
+1 for "what haven't I tested?" - that's how I use the coverage numbers.
Grant Palin
+1  A: 

It depends entirely on the type of software you are developing. If it is remotely accessible, then security testing should be the highest priority. For web applications there are automated scanners such as Acunetix or Wapiti that can be used. There are also tools to help generate unit tests for SOAP services.
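
As a rough illustration of putting a security check under test (the InputValidator class and the payloads are hypothetical, and the style follows the pseudocode used elsewhere on this page rather than a specific framework), cheap tests like this won't replace a scanner such as Acunetix or Wapiti, but they do catch regressions in your own validation code:

[UnitTest]
public void RejectsObviouslyMaliciousInput()
{
    // Hypothetical validator guarding a remotely accessible entry point.
    var validator = new InputValidator();

    // Classic SQL injection and XSS probes should never be accepted verbatim.
    Assert(validator.IsSafe("'; DROP TABLE Users; --") == false);
    Assert(validator.IsSafe("<script>alert('xss')</script>") == false);

    // Ordinary input should still pass.
    Assert(validator.IsSafe("jane.doe@example.com") == true);
}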

Rook
+8  A: 

100% code coverage is not a desirable goal. See this blog for some reasons.

My best practice is to derive test cases from use cases. Create concrete traceability (I use a UML tool, but a spreadsheet works as well) between the use cases your system is supposed to implement and the test cases that prove it works.

Explicitly identify the most critical use cases. Now look at the test cases they trace to. Do you have many test cases for the critical use cases? Do they cover all aspects of the use case? Do they cover negative and exception cases?

I have found that to be the best formula (and best use of the team's time) for ensuring good coverage.
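
As a quick sketch of what that traceability can look like in code form (the use case names, test IDs, and the threshold of three are invented for illustration; a spreadsheet works just as well):

using System;
using System.Collections.Generic;

// Map each use case to the test cases that prove it works, then flag
// critical use cases that trace to suspiciously few tests.
var traceability = new Dictionary<string, List<string>>
{
    ["UC-01 Process payment (critical)"] = new() { "TC-101", "TC-102", "TC-103" },
    ["UC-02 Cancel order (critical)"]    = new() { "TC-110" },
    ["UC-07 Change avatar"]              = new(),
};

foreach (var (useCase, testCases) in traceability)
{
    if (useCase.Contains("(critical)") && testCases.Count < 3)
    {
        Console.WriteLine($"Review coverage: {useCase} traces to only {testCases.Count} test case(s)");
    }
}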

EDIT:

Simple, contrived example of why 100% code coverage does not guarantee you test 100% of cases. Say CriticalProcess() is supposed to call AppendFile() to append text but instead calls WriteFile() to overwrite text.

[UnitTest]
public void Cover100Percent()
{
    CriticalProcess(true, false);
    Assert(FileContents("TestFile.txt") == "A is true");

    CriticalProcess(false, true);
    Assert(FileContents("TestFile.txt") == "B is true");

    // You could leave out this test, have 100% code coverage, and not know
    // the app is broken.
    CriticalProcess(true, true);
    Assert(FileContents("TestFile.txt") == "A is trueB is true");
}

// Bug: WriteFile overwrites the file where an append was intended, so when
// both flags are true the "A is true" text is lost.
void CriticalProcess(bool a, bool b)
{
    if (a)
    {
        WriteFile("TestFile.txt", "A is true");
    }

    if (b)
    {
        WriteFile("TestFile.txt", "B is true");
    }
}
Eric J.
I found that some of the responses arguing 100% is desirable present convincing arguments: e.g., if you have 100%, you know all the code is hit; if you have 90%, there's 10% not being hit and untested. But my original question works under the assumption that 100% is impossible due to budget/time constraints.
Paul Nathan
@Paul: Even if you hit 100% of your lines of code, you still don't come close to testing all possible execution paths. Added a simple example to my answer to illustrate. You're not focusing your effort on proving the software correctly implements the most important functionality if you're focused on ensuring as many lines of code are touched as possible.
Eric J.
+3  A: 

There are three main factors you should be aware of:

  • important features - you should know what is most critical. Ask yourself "How screwed would I (or my customer) be if there's a bug in this component/code snippet?". Your customer can probably help you determine these kinds of priorities. Things that deal directly with money tend to fall into this category.
  • frequently used features - The most common use cases should be as bug-free as possible. Nobody cares if there's a bug in a part of the system no one uses.
  • most complex features - The developers usually have a good idea of which parts of the code are more likely to contain bugs. Give special attention to those.

If you have this information, it probably won't be hard to decide how to distribute your testing resources.
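
As a rough sketch of how those three factors might be combined (the component names, ratings, and weights are made up, not a standard formula):

using System;
using System.Linq;

// Rate each component 1-5 on importance, usage frequency, and complexity,
// then spend test effort on the highest-scoring components first.
// The weights below are arbitrary examples and should be tuned per project.
var components = new[]
{
    (Name: "Billing",       Importance: 5, UsageFrequency: 4, Complexity: 3),
    (Name: "Login",         Importance: 4, UsageFrequency: 5, Complexity: 2),
    (Name: "Report export", Importance: 2, UsageFrequency: 2, Complexity: 4),
};

var ranked = components
    .Select(c => (c.Name, Score: 0.5 * c.Importance + 0.3 * c.UsageFrequency + 0.2 * c.Complexity))
    .OrderByDescending(x => x.Score);

foreach (var (name, score) in ranked)
    Console.WriteLine($"{name}: risk score {score:F1}");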

Samuel Carrijo
+3  A: 

False sense of security: Always be aware that test coverage can create a false sense of security. A great article about this can be found on the disco blog. Relying only on "green" indicators makes it easy to miss untested paths.

Good indicator for untested paths: On the other hand, missing test coverage (usually displayed in red) is a great indicator of paths that are not covered. Check these first: they are easy to spot and let you evaluate whether or not you want to add coverage there.

Code centric approach to identify critical elements: There is great tooling support available to help you find the mess and possible gotchas in your code. Have a look at the IntelliJ IDEA code analysis features, or at FindBugs, Checkstyle and PMD. Sonar is a great free tool that combines these static analysis tools.

Feature centric approach to identify critical elements: Evaluate your software and break it down into features. Ask yourself questions like: "Which features are most important and should be most reliable? Where do we have to take care of the correctness of results? Where would a bug or failure be most destructive to the software?"

Liuh
+1  A: 

Maybe the best hint that a module is insufficiently covered is bug reports against it. Any module you're editing time and again should be well-covered. But cyclomatic complexity correlates pretty well with bug frequency, too - and you can measure that before the bugs show up!

Carl Manaster
+2  A: 

If you have a legacy code-base, a good place to start is:

  • Add a unit test for every bug that you find and fix. The unit test should reproduce the bug; you then fix the code, use the test to verify the fix, and keep it so you can be sure the bug doesn't come back again for any reason (a minimal sketch follows this list).

  • Where possible, add tests to major high-level components so that many low-level breakages will still cause a unit test failure (e.g. instead of testing every database access routine independently, add one test that creates a database, adds 100 users, deletes 50 of them, verifies the result, and drops the database). You won't easily see where the failure is (you'll have to debug to work out why it failed), but at least you know that you have a test that exercises the overall database system and will warn you quickly if anything major goes wrong in that area of the code. Once you have the higher-level areas covered, you can worry about delving deeper.

  • Add unit tests for your new code, or when you modify any code.
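
For the first point, here is a minimal sketch of what a bug-regression test might look like (the bug number, the PriceCalculator class, and its behaviour are invented, and the style follows the pseudocode used earlier on this page):

[UnitTest]
public void Bug1234_ExcessiveDiscountNoLongerGoesNegative()
{
    // Reproduces the original report: a 120% discount used to produce a
    // negative order total.
    var calculator = new PriceCalculator();
    var total = calculator.ApplyDiscount(orderTotal: 50.00m, discountPercent: 120m);

    // After the fix, the total is clamped at zero instead of going negative;
    // this test stays in the suite so the bug can't silently return.
    Assert(total == 0.00m);
}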

Over time, this in itself will help you build up coverage in the more important places.

(Bear in mind that if your codebase is working code that has been in use for years, then for the most part you don't "need" unit tests to prove that it works. If you just add unit tests to everything, they will pretty much all pass and therefore won't tell you much. Of course, over time, as your coverage grows, you may start to detect regressions from those tests, and you will find bugs through the process of adding unit tests for previously untested code. But if you just slog through the code blindly adding unit tests for everything, you'll get a very poor cost-per-bug-fixed ratio.)

Jason Williams