I'm using gcov to measure coverage in my C++ code. I'd like to get to 100% coverage, but am hampered by the fact that there are some lines of code that are theoretically un-hittable (methods that are required to be implemented but which are never called, default branches of switch statements, etc.). Each of these branches contains an assert( false ); statement, but gcov still marks them as un-hit.

I'd like to be able to tell gcov to ignore these branches. Is there any way to give gcov that information -- by annotating the source code, or by any other mechanism?

A: 

I do not believe this is possible. gcov depends on gcc to generate extra code to produce the coverage output; gcov itself just parses the data. This means that gcov cannot analyze the code any better than gcc can (and I assume you use -Wall and have removed code reported as unreachable).

Remember that relocatable functions can be called from anywhere, potentially even from external DLLs or executables, so there is no way the compiler can know which of them will never be called or what input they may receive.

You will probably need to use some fancy static analysis tool to get the information that you want.

doron
+1  A: 

Could you introduce unit tests of the relevant functions, that exist solely to shut gcov up by directly attacking the theoretically-unhittable code paths? Since they're unit tests, they could perhaps ignore the "impossibility" of the situations. They could call the functions that are never called, pass invalid enum values to catch default branches, etc.

Then either run those tests only on the version of your code compiled with NDEBUG, or else run them in a harness which tests that the assert is triggered - whatever your test framework supports.

I find it a bit odd though for the spec to say that the code has to be there, rather than the spec containing functional requirements on the code. In particular, it means that your tests aren't testing those requirements, which is as good a reason as any to keep requirements functional. Personally I'd want to modify the spec to say, "if called with an invalid enum value, the function shall fail an assert. Callers shall not call the function with an invalid enum value in release mode". Or some such.

Presumably what it currently says, is along the lines of "all switch statements must have a default case". But that means coding standards are interfering with observable behaviour (at least, observable under gcov) by introducing dead code. Coding standards shouldn't do that, so the functional spec should take account of the coding standards if possible.

Failing that, you could perhaps wrap the unhittable code in #if !GCOV_BUILD, and do a separate build for gcov's benefit. This build will fail some requirements, but conditional on your analysis of the code being correct, it gives you the confidence you want that the test suite tests everything else.

Edit: you say you're using a dodgy code generator, but you're also asking for a solution by annotating the source code. If you're changing the source, can you just remove the dead code in many cases? Not that changing generated source is ideal, but needs must...

Steve Jessop
It's not that the "spec" says the functions have to be there. We have a code generator that generates prototypes for the functions, even though they're not used. (Fixing the code generator would be a better option, but unfortunately that's not under my control.) Another situation where this sometimes crops up is where you're implementing an interface (i.e. deriving from a class with pure virtual functions) but are only using part of that interface.
jchl
Having a unit test call the functions directly isn't a bad idea, though having to run the tests on an NDEBUG build would be painful (currently our unit tests all run on debug builds). That sounds like more work than it's worth. I could just get rid of the asserts, though I like them there for documentation purposes. I could replace them with throwing a special exception that's never caught except during the unit test.... that's not a bad idea.
jchl
@jchl: "you're implementing an interface (i.e. deriving from a class with pure virtual functions) but are only using part of that interface." - sort of. If I was writing comprehensive tests for the class, though, I'd still have the class define what they do, and call the unused functions from the tests to make sure they do it. And if I wasn't writing comprehensive tests, I wouldn't care whether I had code coverage or not ;-)
Steve Jessop