Converting my current code project to TDD, I've noticed something.
class Foo {
    public event EventHandler Test;

    public void SomeFunction() {
        // snip...
        Test(this, new EventArgs());
    }
}
There are two dangers I can see when testing this code and relying on a code coverage tool to determine if you have enough tests.
- You should be testing whether the Test event gets fired. Code coverage tools alone won't tell you if you forget this.
- I'll get to the other in a second.
To this end, I added an event handler to my startup function so that it looked like this:
Foo test;
int eventCount;

[Startup]
public void Init() {
    test = new Foo();
    // snip...
    eventCount = 0;
    test.Test += MyHandler;
}

void MyHandler(object sender, EventArgs e) { eventCount++; }
Now I can simply check eventCount to see how many times my event was called, if it was called at all. Pretty neat.
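For illustration, a test against this fixture might look something like the sketch below; the [Test] attribute and Assert.AreEqual call are NUnit-style placeholders rather than part of my actual suite:

[Test]
public void SomeFunction_FiresTestEvent() {
    test.SomeFunction();
    // The handler attached in Init() should have run exactly once.
    Assert.AreEqual(1, eventCount);
}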
Only now we've let through an insidious little bug that none of these tests will ever catch: SomeFunction() doesn't check whether the event has any handlers before raising it. Raising an event with no subscribers throws a NullReferenceException, which our tests will never see because every one of them attaches a handler in setup. And yet a code coverage tool will still report full coverage.
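To be clear about what's missing, one conventional way to guard the raise looks roughly like this (a sketch only; copying the delegate to a local also avoids a race if the last handler is detached between the check and the call):

public void SomeFunction() {
    // snip...
    EventHandler handler = Test;   // copy to a local so the null check and the invocation see the same value
    if (handler != null)
        handler(this, new EventArgs());
}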
This is just the "real world" example I have at hand, but it strikes me that plenty more errors of this sort can slip through: even 100% 'coverage' of your code doesn't translate to 100% tested. Should we take the coverage reported by such a tool with a grain of salt when writing tests? Are there other sorts of tools that would catch these holes?
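For concreteness, the kind of test that coverage alone would never prompt me to write looks something like this (Assert.DoesNotThrow is NUnit-style here and purely illustrative):

[Test]
public void SomeFunction_DoesNotThrowWithNoHandlers() {
    var bare = new Foo();   // no handler attached, unlike the shared fixture
    Assert.DoesNotThrow(() => bare.SomeFunction());
}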