The other day we had a hard discussion between different developers and project leads, about code coverage tools and the use of the corresponding reports.

  • Do you use code coverage in your projects, and if so, why? If not, why not?
  • Is code coverage a fixed part of your builds or continuous integration, or do you just use it from time to time?
  • How do you deal with the numbers derived from the reports?
+4  A: 

We use code coverage to verify that we aren't missing big parts in our testing efforts. Once per milestone or so, we run a full coverage report and spend a few days analyzing the results, adding test coverage for the areas we missed.

We don't run it every build because I don't know that we would analyze it on a regular enough basis to justify that.

We analyze the reports for large blocks of unhit code; we've found this to be the most efficient use. In the past we would try to hit a particular code-coverage target, but past a certain point the returns diminish sharply. Instead, it's better to use code coverage as a tool to make sure you didn't forget anything.
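
A minimal sketch of this kind of milestone analysis, assuming a Python project measured with coverage.py (the answer doesn't name a tool, so the script below is illustrative): rank files by how many statements the test run never hit, then dig into the biggest offenders.

    # milestone_report.py - rank files by missed statements after
    # `coverage run -m pytest` (illustrative; not this team's actual tooling).
    import coverage

    cov = coverage.Coverage()
    cov.load()  # read the .coverage data file from the test run

    results = []
    for filename in cov.get_data().measured_files():
        # analysis2 returns (filename, statements, excluded, missing, missing_str)
        _, statements, _, missing, _ = cov.analysis2(filename)
        results.append((len(missing), filename))

    # Largest blocks of unhit code first - the areas worth a few days of
    # milestone analysis.
    for missed, filename in sorted(results, reverse=True)[:20]:
        print(f"{missed:5d} missed statements  {filename}")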

Steve Rowe
+3  A: 

1) Yes we do use code coverage

2) Yes it is part of the CI build (why wouldn't it be?)

3) The important part: we don't look for 100% coverage. What we do look for is buggy/complex code; it's easy to spot from your unit tests, and the devs/leads will know the delicate parts of the system. We make sure the coverage of such code areas is good and increases over time, rather than decreasing as people hack in more fixes without the requisite tests.
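
One way to make "increases with time, not decreases" mechanical is a ratchet check in CI. A sketch under the assumption of a Python project and coverage.py (the baseline file and its name are made up for illustration):

    # coverage_ratchet.py - fail the build if total coverage drops below the
    # recorded high-water mark (illustrative sketch, not this team's setup).
    import json
    import sys

    import coverage

    BASELINE_FILE = "coverage_baseline.json"  # assumed name, kept in the repo

    cov = coverage.Coverage()
    cov.load()
    current = cov.report(file=sys.stdout)  # returns total coverage as a float

    try:
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)["total"]
    except FileNotFoundError:
        baseline = 0.0

    if current < baseline:
        print(f"Coverage fell from {baseline:.1f}% to {current:.1f}%")
        sys.exit(1)

    # Ratchet upward: record the new high-water mark.
    with open(BASELINE_FILE, "w") as f:
        json.dump({"total": max(current, baseline)}, f)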

MrTelly
+1  A: 

I like to measure code coverage on any non-trivial project. As has been mentioned, try not to get too caught up in achieving an arbitrary/magical percentage. There are better metrics, such as riskiness based on complexity, coverage by package/namespace, etc.

Take a look at this sample Clover dashboard for similar ideas.

Jeremy Ross
+1  A: 

We run it in the build and check that coverage does not drop below a set value, such as 85%. I also generate an automatic "Top 10 largest not-covered methods" report, to know where to start covering.
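
A rough sketch of such a report with coverage.py, approximating "largest not-covered method" as the longest run of consecutive missed lines (an assumption; the answer doesn't say how its report is computed):

    # top_uncovered.py - print the ten longest runs of consecutive uncovered
    # lines, a crude proxy for the largest not-covered methods (illustrative).
    import coverage

    cov = coverage.Coverage()
    cov.load()

    runs = []
    for filename in cov.get_data().measured_files():
        _, _, _, missing, _ = cov.analysis2(filename)
        start = prev = None
        for line in sorted(missing) + [None]:  # None sentinel flushes the last run
            if prev is not None and line == prev + 1:
                prev = line
                continue
            if start is not None:
                runs.append((prev - start + 1, filename, start))
            start = prev = line

    for length, filename, start in sorted(runs, reverse=True)[:10]:
        print(f"{length:4d} uncovered lines starting at {filename}:{start}")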

Andrey Shchekin
A: 

Many teams switching to Agile/XP use code coverage as an indirect way of gauging the ROI of their test automation efforts.

I think of it as an experiment: there's a hypothesis that "if we start writing unit tests, our code coverage will improve", and it makes sense to collect the corresponding observation automatically via CI, report it on a graph, etc.

You use the results to detect rough spots: if the trend toward more coverage levels off at some point, for instance, you might stop to ask what's going on. Perhaps the team has trouble writing tests that are relevant.
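
Collecting that observation can be a one-step CI job that appends the day's number to a log for graphing later. A minimal sketch, assuming coverage.py and a made-up CSV file name:

    # record_trend.py - append today's total coverage to a CSV so the team
    # can graph the trend and spot where it levels off (illustrative).
    import csv
    import datetime
    import io

    import coverage

    cov = coverage.Coverage()
    cov.load()
    total = cov.report(file=io.StringIO())  # discard the textual report

    with open("coverage_trend.csv", "a", newline="") as f:  # assumed file name
        csv.writer(f).writerow([datetime.date.today().isoformat(), f"{total:.1f}"])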

Morendil
A: 

We use code coverage to ensure that we have no major holes in our tests, and it's run nightly in our CI.

Since we also have a full set of Selenium web tests that run all the way through the stack, we do an additional coverage trick:

We set up the web application with coverage running, then run the full automated battery of Selenium tests. Some of these are smoke tests only.

When the full suite of tests has been run, we can identify suspected dead code simply by looking at the coverage and inspecting the code. This is really nice on large projects, because big branches of dead code can accumulate over time.

We don't really have any fixed metrics on how often we do this, but it's all set up to run with a keypress or two.
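
For the flavor of the setup, here is a hedged sketch in Python rather than this answer's own stack: serve the app with coverage recording, point the Selenium battery at it, and inspect what was never hit afterwards. The myapp module is a placeholder:

    # coverage_server.py - run the web app under coverage while the Selenium
    # suite exercises it; lines still untouched afterwards are dead-code suspects.
    from wsgiref.simple_server import make_server

    import coverage

    cov = coverage.Coverage(data_file=".coverage.selenium")
    cov.start()

    from myapp import application  # placeholder WSGI app; import after start()

    try:
        # Run the full Selenium battery against http://localhost:8000 now.
        make_server("", 8000, application).serve_forever()
    except KeyboardInterrupt:
        pass
    finally:
        cov.stop()
        cov.save()  # then inspect with `coverage report` or `coverage html`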

krosenvold
A: 

We do use code coverage; it is integrated into our nightly build. There are several tools for analyzing the coverage data; commonly they report

  1. statement coverage
  2. branch coverage
  3. MC/DC coverage

We expect to reach 90%+ statement and branch coverage. MC/DC coverage, on the other hand, gives the test team a broader picture. For the uncovered code, we expect justification records as well.
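
To make the difference between the first two metrics concrete, here is a small illustrative example (not from the answer): one test executes every statement of this function, yet the false branch of the if is never taken, so statement coverage is 100% while branch coverage is not.

    def clamp_discount(price, discount):
        """Illustrative: 100% statement coverage != 100% branch coverage."""
        if discount > price:
            discount = price
        return price - discount

    def test_clamp_discount():
        # Every statement runs, but the case where the `if` is false
        # (discount <= price) is never exercised.
        assert clamp_discount(10, 15) == 0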

kokeksibir
+2  A: 

Code coverage tells you how big your "bug catching" net is, but it doesn't tell you how big the holes are in your net.

Use it as an indicator to gauge your testing efforts but not as an absolute metric.

It is possible to write tests that give you 100% coverage and do not test anything at all.
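
A minimal illustration of that last point (a contrived example, not from the answer): this test executes every line, so the report says 100%, yet it verifies nothing and would pass even if the arithmetic were wrong.

    def apply_tax(amount, rate):
        return amount * (1 + rate)  # a bug here would go completely unnoticed

    def test_apply_tax():
        apply_tax(100, 0.2)  # hits every line - 100% coverage, zero assertions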

Hibri
A: 

I find it depends on the code itself. I won't repeat Joel's statements from SO podcast #38, but the upshot is 'try to be pragmatic'.

Code coverage is great in core elements of the app.

I look at the code as a dependency tree. If the leaves work (e.g. basic UI, or code calling a unit-tested DAL) and I've tested them when I developed or updated them, there is a good chance they will work; and if there's a bug, it won't be difficult to find or fix, so the time taken to mock up some tests would probably be time wasted. Yes, there is a risk that updates to the code they depend on may affect them, but again, it's a case-by-case thing, and the unit tests for the code they depend on should cover that.

When it comes to the trunks or branches of the code, yes, code coverage of functionality (as opposed to each function) is very important.

For example, I recently was on a team that built an app that required a bundle of calculations to calculate carbon emissions. I wrote a suite of tests that tested each and every calculation, and in doing so was happy to see that the dependency injection pattern was working fine.

Inevitably, due to a government act change, we had to add a parameter to the equations, and all 100+ tests broke.

I realised that, over and above testing for a typo (which I could check once), updating them meant unit/regression testing mathematics, and I ended up spending the time building another area of the app instead.

johnc
+2  A: 

The way to look at code coverage is to see how much is NOT covered, and to find out why it is not covered. Code coverage simply tells us which lines of code are being hit when the unit tests run; it does not tell us whether the code works correctly. 100% code coverage is a good number, but in medium/large projects it is very hard to achieve.

azamsharp
A: 

1) Yes, we do measure simple node coverage, because:

  • it is easy to do with our current project* (Rails web app)
  • it encourages our developers to write tests (some come from backgrounds where testing was ad-hoc)

2) Code coverage is part of our continuous integration process.

3) The numbers from the reports are used to:

  • enforce a minimum level of coverage (95%, otherwise the build fails; see the sketch at the end of this answer)
  • find sections of code which should be tested

There are parts of the system where testing is not all that helpful (usually where you need to make use of mock-objects to deal with external systems). But generally having good coverage makes it easier to maintain a project. One knows that fixes or new features do not break existing functionality.

*Details for setting up required coverage for Rails: Min Limit 95 Ahead
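
For reference, a minimal sketch of that kind of gate in Python with coverage.py (this answer's own project enforces it with Rails tooling, so this is an assumed equivalent):

    # enforce_min_coverage.py - fail the build when total coverage is under 95%.
    import sys

    import coverage

    cov = coverage.Coverage()
    cov.load()
    total = cov.report()  # prints the report and returns the total as a float

    if total < 95.0:
        print(f"Coverage {total:.1f}% is below the required 95%; failing the build")
        sys.exit(1)

coverage.py can also do this in one step with "coverage report --fail-under=95".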