I regularly achieve 100% coverage of libraries using TDD, but not always, and there always seem to be parts of applications left over that are untested and uncovered.
Then there are the cases when you start with legacy code that has very few tests and very little coverage.

Please say what your situation is and what has worked to at least improve coverage.
I'm assuming that you are measuring coverage during unit testing, but say if you are using other techniques.

+6  A: 

Delete code.

This isn't snarky, but actually serious. Any time I would see the smallest amount of code duplication or even code that I couldn't get to execute, I deleted it. This increased coverage and increased maintainability.

I should note that this is more applicable to increasing the coverage of old code bases vs. new code bases.

Frank Krueger
I agree, this is a seriously good technique. Gets my vote!
There's nothing more satisfying than deleting hundreds of lines of someone else's bad, unused code.
+1  A: 

We use Perl, so Devel::Cover has been very useful for us. It shows per-statement, branch, and conditional coverage during unit testing, as well as things like POD coverage. We use the HTML output, with easy-to-recognize green for 100% shading through yellow and red for lower levels of coverage.

EDIT: To expand on things a little:

  • If conditional coverage isn't complete, examine the conditions for interdependence. If it's there, refactor. If it isn't, you should be able to extend your tests to hit all of the conditions.
  • If conditional and branch coverage looks complete but statement coverage isn't, you've either written the conditionals wrong (e.g. always returning early from a sub when you didn't mean to) or you've got extra code that can be safely removed.
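To make the first bullet concrete, here is a hypothetical Java sketch (the identifiers and the `x > 10 && x > 5` example are mine, not the answerer's) of interdependent conditions blocking full conditional coverage, and the refactoring that fixes it:

```java
public class Conditions {
    // Before: the conditions are interdependent — whenever x > 10 is
    // true, x > 5 is necessarily true as well, so with short-circuit
    // evaluation the false branch of "x > 5" is unreachable and
    // conditional coverage can never reach 100%.
    static String classifyBefore(int x) {
        if (x > 10 && x > 5) {
            return "large";
        }
        return "small";
    }

    // After refactoring away the redundant condition, every remaining
    // branch can be hit by a test.
    static String classify(int x) {
        return x > 10 ? "large" : "small";
    }
}
```

The behavior is unchanged; only the untestable condition is gone, which is exactly the kind of refactoring the coverage report prompts.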
Adam Bellaire
What are you saying improved coverage? Is it just that you are using a coverage tool?
I've expanded my answer in the edit; I misunderstood the level of detail you were asking for.
Adam Bellaire
+1  A: 

The two things that had the greatest impact on projects I've worked on were:

  1. Periodically "reminding" the development team to actually implement unit tests, and reviewing how to write effective tests.
  2. Generating a report of overall test coverage, and circulating that among the development managers.
Mark Bessey
+2  A: 

I assume you've read "Code covered vs. Code Tested", right?

As stated in that question,

Even with 100% block coverage + 100% arc coverage + 100% error-free-for-at-least-one-path straight-line code, there will still be input data that executes paths/loops in ways that exhibit more bugs.

Now, I use EclEmma, which is based on EMMA, and that code-coverage tool explains why 100% coverage is not always possible: lines can end up only partially covered due to:

  • Implicit branches on the same line.
  • Shared constructor code.
  • Implicit branches due to finally blocks.
  • Implicit branches due to a hidden Class.forName().

All four of those cases might be good candidates for refactoring that leads to better code coverage.
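As an illustration of the finally-block case (a hypothetical example of mine, not from the original answer): javac emits the finally body once for the normal exit and once for the exceptional exit, so a bytecode-level tool like EMMA marks that line partially covered until tests drive both paths.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class Parser {
    // Counts how often cleanup ran, so the two paths are observable.
    static final AtomicInteger cleanups = new AtomicInteger();

    // The finally body is compiled into two copies (normal return and
    // exceptional exit), so coverage tools working on bytecode show it
    // as partially covered until tests exercise BOTH the successful
    // parse and the NumberFormatException path.
    static int parse(String s) {
        try {
            return Integer.parseInt(s);
        } finally {
            cleanups.incrementAndGet(); // stand-in for real cleanup
        }
    }
}
```

Fully covering that one source line therefore takes at least two tests: one with valid input and one that triggers the exception.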

Now, I agree with Frank Krueger's answer. Some non-covered code might also be an indication of some refactoring to be done, including some code to actually delete ;)

I understand the difference between code covered and code tested - it's far too easy to cheat in a unit test and get 'improved' coverage by just calling other methods without actually testing anything. I agree, EclEmma is a great tool and it's easy to use.

FIT testing has improved our code coverage. It has been great because it is an entirely different tack.

Background: we have a mix of legacy and new code. We try to unit/integration test the new stuff as much as possible, but because we are migrating to Hibernate/Postgres and away from an OODB, there isn't much point to testing the legacy code.

For those who don't know, FIT is a way to test software from the user's perspective. Essentially, you specify desired behaviour in HTML tables: the tables spell out actions against the software and the expected results. Our team writes 'glue code' (a.k.a. the FIT test) that maps those actions to calls against the code. Note that these tests operate at a 'view from space' compared to unit tests.
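A minimal sketch of what this looks like in practice — every name below (`Discounter`, `DiscountFixture`, the table columns) is hypothetical, not from the original post:

```java
// A FIT table in HTML drives the test. The first row names the
// fixture class; later rows supply inputs and expected outputs:
//
//   <table>
//     <tr><td colspan="2">DiscountFixture</td></tr>
//     <tr><td>amount</td><td>discount()</td></tr>
//     <tr><td>100</td><td>0.0</td></tr>
//     <tr><td>1500</td><td>75.0</td></tr>
//   </table>
//
// The glue code is a thin fixture — typically a fit.ColumnFixture
// subclass whose public 'amount' field FIT fills from each row, and
// whose discount() return value FIT checks against the last column:
//
//   public class DiscountFixture extends fit.ColumnFixture {
//       public double amount;
//       public double discount() { return new Discounter().discount(amount); }
//   }

// The production code the fixture delegates to:
public class Discounter {
    // 5% discount on orders over 1000, otherwise none.
    public double discount(double amount) {
        return amount > 1000 ? amount * 0.05 : 0.0;
    }
}
```

Because the fixture is only a mapping layer, the same tables can keep working while the implementation behind them is migrated — which is what makes these tests useful across versions.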

Using this approach, we have increased our code-coverage by several percentage points. An added bonus is that these tests will bridge across versions: they will test legacy code but then, later, new code. i.e. they serve as regression tests, in a sense.

Michael Easter
I'd never thought about re-using FIT tests for future versions of an application - I'll keep that in mind. Have you looked into FitNesse as a front end/wiki for FIT tests?
We are looking into FitNesse but have nothing to report as yet. By the way, I should mention that FIT is a cool idea, but the framework itself is quite thin. My team is still struggling to determine exactly how much glue code we should be writing. But it's neat and has helped us with code coverage.
Michael Easter