views: 148
answers: 7

I am a development manager on a project with painfully low unit test coverage, and we are definitely feeling the weight of the "technical debt" in the legacy code in our system.

My question is whether anyone uses code coverage as a milestone or development threshold that prevents the project from moving to the next sprint until coverage reaches a specific level. What is the "best practice" for using the code coverage metric?

+1  A: 

In general: if you practice Scrum (or any other agile methodology), you should follow the time-boxing principle and avoid extending/delaying your sprint.

In particular: the code coverage metric alone is not enough to estimate the test status or readiness of your application. A more sophisticated combination of metrics should be used (refer to books on software testing).

Alexey Kalmykov
+5  A: 

I think using code coverage as a blocker is not the way to go. The reason is that good coverage is not the primary objective, and it can easily turn into a goal in itself. It is pretty easy to just "run stuff" to get the metric up instead of actually testing it.

So, in my experience, the most important thing is that you actually verify something while running the code. In other words, the important thing is that your tests test, and not just run, the code.
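To make the distinction concrete, here is a minimal sketch (the `apply_discount` function and both tests are hypothetical): the first test inflates the coverage number without verifying anything, while the second actually tests behaviour.

```python
# Hypothetical function under test.
def apply_discount(price, percent):
    return price - price * percent / 100

# This "test" executes every line of apply_discount, so coverage
# reports 100% -- but it would still pass if the function were
# completely wrong, because it asserts nothing.
def test_runs_code_only():
    apply_discount(100, 20)

# This test verifies behaviour, not just execution.
def test_checks_result():
    assert apply_discount(100, 20) == 80
    assert apply_discount(100, 0) == 100
```

Both tests produce the same coverage figure; only the second one can ever fail.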

But by all means, use code coverage as a metric and celebrate appropriately when it increases :-)

Martin Wickman
+6  A: 

Code coverage is a very relative thing. First of all, coverage alone tells you nothing about the quality of your code or your unit tests. Secondly, sometimes it's easy to get 90% coverage with only a few tests, but sometimes it's very hard to even get to 50%. This is especially true of legacy projects, which very often weren't designed with unit testing in mind (avoiding external dependencies, for example).

If you really want to use it as a milestone, my advice is to take some important classes of your code, for example those that hold a lot of business logic, and see whether it's easy to achieve a high code coverage percentage on them. If so, make sure the coverage of those classes always stays up to par.

My experience tells me it takes a lot of time to get high code coverage on legacy classes, and this isn't always worth the investment.

I hope this helps!

Gerrie Schenck
"Cherry Picking" specific classes that should be tested better sounds like good advice. I think I'll just add a task to each sprint to improve the testing of specific classes and the result will be better code coverage overall. Thanks Gerrie!
Mark Ewer
+1  A: 

I think the code coverage metric is a tad coarse-grained to be used as such. If you limited it to specific areas of the codebase it might be a bit better. But then, are you getting 80% by testing properties or that one monster method that causes you the most problems?

I wouldn't use it as a crutch.

Will
+1  A: 

A high code coverage metric does not guarantee code quality. From "The Meaning of 100% Test Coverage":

What Does 100% Coverage NOT Mean?

Correctness. While having 100% coverage is a strong statement about the level of testing that went into a piece of code, on its own, it can not guarantee that the code being tested is completely error-free.
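A tiny (made-up) illustration of that point: the function below is fully covered by its test, yet completely wrong, because the one tested input happens to mask the bug.

```python
def add(a, b):
    # Bug: multiplies instead of adding.
    return a * b

def test_add():
    # This single test gives 100% line coverage of add(),
    # yet the bug survives, because 2 * 2 == 2 + 2.
    assert add(2, 2) == 4
```

Coverage says every line ran; it says nothing about whether the right values came back.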

One of the main advantages of creating unit tests as part of Test Driven Development is that it steers the code into a more testable state.

Trying to add tests over existing code after the fact can lead to huge test fixtures that need to set up dozens of dependencies; the code was originally written to run as part of an application, not to be exercised in a unit test.

Revisiting the application's design and refactoring the code to be testable would, to me, be a worthy goal. This may significantly decrease technical debt while increasing the testability and maintainability of the code base. It could also be highly time-consuming and not worth it from a business standpoint.

James Kolpack
+1  A: 

If you're dealing with "legacy code", using coverage as a blocker will cause you pain. Instead, require any new or refactored code to be under test; the percentage of legacy code covered will then naturally increase over time, and you avoid the artificial feeling of safety that using coverage as a blocker can create.
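One way to enforce "coverage may only go up" without a fixed blocker is a coverage ratchet. This is a hedged sketch, not any particular tool's API: the baseline file name and the percentages are invented, and in practice you'd feed it the number your coverage tool reports.

```python
import os

BASELINE_FILE = "coverage_baseline.txt"  # hypothetical file kept in CI

def check_ratchet(current_pct, baseline_path=BASELINE_FILE):
    """Return True if the build passes.

    Fails only when coverage drops below the best level seen so far;
    on success, ratchets the recorded floor upward.
    """
    baseline = 0.0
    if os.path.exists(baseline_path):
        with open(baseline_path) as f:
            baseline = float(f.read().strip())
    if current_pct < baseline:
        return False  # coverage regressed: fail the build
    with open(baseline_path, "w") as f:
        f.write(f"{current_pct:.1f}")  # new floor for future builds
    return True
```

The legacy code starts wherever it starts, but every sprint of new, tested code raises the floor.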

Chuck
+2  A: 

For legacy systems, setting up a barrier like this is probably too painful for the entire codebase, particularly if the codebase is non-trivially large. You can do more harm than good, since paying off this technical debt likely involves periods of instability during the inevitable refactorings needed to catch the old code up, and that code is likely not very test-friendly.

I would recommend targeted refactorings with a coverage threshold set for new code only. If one area of the codebase is painful and too risky to add new code to, then block out some spike time for redesign and refactoring. All fixed bugs should have failing tests written first, and new features should target a high level of coverage, ~90% or higher. (The last 10% of the fabled "100%" coverage is often very costly, as it likely involves GUI layers and integration work. This is a controversial opinion, but I've found it holds true for the most part. Be happy with 90% or higher on new code.)
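The "failing test first" flow for bug fixes can be sketched like this. Everything here is hypothetical: imagine a `parse_qty` helper that used to crash on blank input; the regression test below would have failed before the fix and now pins the corrected behaviour down.

```python
def parse_qty(text):
    # After the fix: empty or blank input means a quantity of zero
    # (before the fix, int("") raised ValueError).
    return int(text) if text.strip() else 0

def test_blank_input_regression():
    # Written first, watched fail, then the fix above was applied.
    assert parse_qty("") == 0
    assert parse_qty("  ") == 0
    assert parse_qty("7") == 7
```

The test doubles as documentation of the bug and guarantees it stays fixed, which is coverage that actually earns its keep.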

The CI server for the project I'm on currently will fail the build if coverage falls below a threshold, but that's on a project that started off with good TDD practice. Large legacy apps with lots of technical debt tend to come booby-trapped with unstable areas and political consequences that don't need any more trauma than they already have. Set a goal of gradual improvement over time rather than a one-time catch up.
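For illustration, a CI gate like the one described might parse the TOTAL line of a textual coverage report and compare it to a threshold. The report format and the `build_passes` helper below are only a sketch (real tools such as coverage.py offer a built-in `--fail-under` option for this).

```python
def build_passes(report_text, threshold=90):
    """Return True if the TOTAL coverage line meets the threshold."""
    for line in report_text.splitlines():
        if line.startswith("TOTAL"):
            pct = int(line.split()[-1].rstrip("%"))
            return pct >= threshold
    return False  # no TOTAL line found: treat as a failed build

# Invented sample report for demonstration.
sample = "Name  Stmts  Miss  Cover\nTOTAL   200    30    85%"
```

On a greenfield TDD project the threshold can start high; on a legacy app, start it where the code actually is and raise it gradually.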

Dave Sims