views: 150

answers: 3

Wherever possible I use TDD:

  • I mock out my interfaces
  • I use IoC so my mocked objects can be injected
  • I ensure my tests pass, coverage increases, and I am happy.

then...

  • I create derived classes that actually do stuff, such as going to a database or writing to a message queue.

This is where code coverage decreases - and I feel sad.

But then, I liberally spread [CoverageExclude] over these concrete classes and coverage goes up again.

But then instead of feeling sad, I feel dirty. I somehow feel like I'm cheating even though it's not possible to unit-test the concrete classes.

I'm interested in hearing how your projects are organised, i.e. how do you physically arrange code that can be tested against code that can't be tested.

I'm thinking that perhaps a nice solution would be to separate out untestable concrete types into their own assembly and then ban the use of [CoverageExclude] in the assemblies that do contain testable code. This'd also make it easier to create an NDepend rule to fail the build when this attribute is incorrectly found in the testable assemblies.
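Such a rule might look roughly like this (a CQLinq-style sketch for NDepend; the assembly name and the attribute's type name are assumptions for illustration):

```csharp
// Fail the build if [CoverageExclude] appears anywhere in a testable assembly.
warnif count > 0
from t in Application.Types
where t.ParentAssembly.Name == "MyApp.Core"       // a designated testable assembly
   && t.HasAttribute("CoverageExcludeAttribute")  // the coverage-exclusion attribute
select t
```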


Edit: the essence of this question touches on the fact that you can test the things that USE your mocked interfaces, but you can't (or shouldn't!) UNIT-test the objects that ARE the real implementations of those interfaces. Here's an example:

public void ApplyPatchAndReboot()
{
    _patcher.ApplyPatch();
    _rebooter.Reboot();
}

The patcher and rebooter are injected via the constructor:

public SystemUpdater(IApplyPatches patcher, IRebootTheSystem rebooter)...

The unit test looks like:

public void should_reboot_the_system()
{
    ... new SystemUpdater(mockedPatcher, mockedRebooter);
    update.ApplyPatchAndReboot();
}
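Filled in, the elided test might look like this (a sketch using NUnit and Moq; the interface members are inferred from the snippets in this question):

```csharp
using Moq;
using NUnit.Framework;

public interface IApplyPatches    { void ApplyPatch(); }
public interface IRebootTheSystem { void Reboot(); }

public class SystemUpdater
{
    private readonly IApplyPatches _patcher;
    private readonly IRebootTheSystem _rebooter;

    public SystemUpdater(IApplyPatches patcher, IRebootTheSystem rebooter)
    {
        _patcher = patcher;
        _rebooter = rebooter;
    }

    public void ApplyPatchAndReboot()
    {
        _patcher.ApplyPatch();
        _rebooter.Reboot();
    }
}

[TestFixture]
public class SystemUpdaterTests
{
    [Test]
    public void should_reboot_the_system()
    {
        var mockedPatcher  = new Mock<IApplyPatches>();
        var mockedRebooter = new Mock<IRebootTheSystem>();
        var update = new SystemUpdater(mockedPatcher.Object, mockedRebooter.Object);

        update.ApplyPatchAndReboot();

        // The behaviour under test: applying a patch must trigger a reboot.
        mockedRebooter.Verify(r => r.Reboot(), Times.Once());
    }
}
```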

This works fine - my UNIT-TEST coverage is 100%. I now write:

public class ReallyRebootTheSystemForReal : IRebootTheSystem
{
    ... call some API to really (REALLY!) reboot
}

My UNIT-TEST coverage goes down and there's no way to UNIT-TEST the new class. Sure, I'll add a functional test and run it when I've got 20 minutes to spare(!).

So, I suppose my question boils down to the fact that it's nice to have near 100% UNIT-TEST coverage. Said another way, it's nice to be able to unit-test near 100% of the behaviour of the system. In the above example, the BEHAVIOUR of the patcher should be to reboot the machine. This we can verify for sure. The ReallyRebootTheSystemForReal type isn't strictly just behaviour - it has side effects, which means it can't be unit-tested. Since it can't be unit-tested, it affects the test-coverage percentage. So,

  • Does it matter that these things reduce the unit-test coverage percentage?
  • Should they be segregated into their own assemblies, where people expect 0% UNIT-TEST coverage?
  • Should concrete types like this be so small (in Cyclomatic Complexity) that a unit test (or otherwise) is superfluous?


+1  A: 

I don't understand how your concrete classes are untestable. This smells horrible to me.

If you have a concrete class that is writing to a message queue, you should be able to pass it a mock queue and test all its methods just fine. If your class is going to a database, then you should be able to hand it a mock database to go to.

There can be situations that lead to untestable code, I won't deny that - but that should be the exception, not the rule. If it applies to all your concrete worker classes, something isn't right.

womp
If your Resource Access Component talks to a Legacy System or public web service, it's pretty hard to test the actual implementation that talks to such systems.
Mark Seemann
womp, some of the concrete classes are untestable because they touch things like databases, message queues etc. You say that if I have a concrete class that writes to a message queue then I can pass it a mock queue. That's quite correct, but ultimately, there will be a piece of software that physically writes to the queue: the ultimate concrete implementation that physically touches a queue. This is what's untestable even though everything that USES it (or more accurately, its interface) HAS been tested.
Steve Dunn
If you inject your queue, then your class does nothing different with the mock than it does with the real queue. It is physically touching a queue; it is physically writing to the queue. You can test that. There isn't a line of code there that can't be tested.
womp
Sure, I can test things that USE the queue writing interface, and I do - these are covered by unit tests. Then I write the code that IS the queue writing component. It initialises the queue and writes to it. This can't be unit tested in the normal sense of unit testing (don't touch external systems etc.) When you say there isn't a line of code that can't be tested: this is true, but not true for 'unit-testing'. Try writing the test 'should_reboot_machine'! :)
Steve Dunn
There's no clear demarcation of what a 'unit test' really is. A Data Access Component (DAC) and the database that it talks to can be viewed as a single unit. You can still test that. Whether you insist on calling such an automated test a 'unit test' or an 'integration test' is not important. The important part is whether there's value in writing such tests. For DACs, there often is. On the other hand, there certainly are things you can't test, like the reboot example you just gave. As long as we implement these as Humble Objects, we should be good.
Mark Seemann
+2  A: 

You are on the right track. Some of the concrete implementations you probably can test, such as Data Access Components. Automated testing against a relational database is most certainly possible, but should also be factored out into its own library (with a corresponding unit test library).

Since you are already using Dependency Injection, it should be a piece of cake for you to compose such a dependency back into your real application.

On the other hand, there will also be concrete dependencies that are essentially un-testable (or de-testable, as Fowler once joked). Such implementations should be kept as thin as possible. Often, it is possible to design the API that such a Dependency exposes in such a way that all the logic happens in the consumer, and the complexity of the real implementation is very low.
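For the reboot example, the humble implementation would be little more than a pass-through (a sketch; `NativeMethods.InitiateSystemShutdown` stands in for whichever real API call you actually use):

```csharp
// The decision logic ("after a patch, reboot") lives in SystemUpdater,
// which is fully unit-tested against IRebootTheSystem.
// The humble implementation below has a Cyclomatic Complexity of 1:
// no branches, no logic, nothing worth unit-testing.
public class ReallyRebootTheSystemForReal : IRebootTheSystem
{
    public void Reboot()
    {
        // Hypothetical wrapper around the real shutdown API.
        NativeMethods.InitiateSystemShutdown();
    }
}
```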

Implementing such concrete Dependencies is an explicit design decision, and when you make that decision, you simultaneously decide that such a library should not be unit tested, and thus code coverage should not be measured.

Such a library is called a Humble Object. It, along with many other patterns, is described in the excellent xUnit Test Patterns.

As a rule of thumb, I accept that code is untested if it has a Cyclomatic Complexity of 1. In that case, it's more or less purely declarative. Pragmatically, untestable components are in order as long as they have low Cyclomatic Complexity. How low 'low' is, you must decide for yourself.

In any case, [CoverageExclude] seems like a smell to me (I didn't even know it existed before I read your question).

Mark Seemann
+1: excellent reading suggestion
Roberto Liffredo
Thanks for the comment Mark. I'll certainly read the test patterns book. I'll also take a look at the CC; I suspect it's very low. Just to be clear, when I used the word 'test', I meant purely 'unit-test', i.e. testing the behaviour of different types. The CoverageExclude attribute is recognised by NCover.
Steve Dunn
+1  A: 

To expand on womp's answer: I suspect you are considering more to be "untestable" than really is. Untestable in the strict "one unit at a time" sense of unit testing, without touching any of the dependencies? Sure. But they should be easy to cover with slower, more infrequently run integration-style tests.

You mention accessing databases and writing messages to queues. As womp mentions, you can feed them mock databases and mock queues during unit testing, and test the actual concrete behaviour in integration tests. Personally, I don't see anything wrong with testing concrete implementations directly as unit tests either, at least when they are not remote (or legacy). Sure, they run a bit slower, but hey, at least they get covered by automated tests.
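An integration-style test for a queue writer might look like this (a sketch against MSMQ via System.Messaging with NUnit; `MsmqMessageWriter` and the queue path are hypothetical names for illustration):

```csharp
using System;
using System.Messaging;
using NUnit.Framework;

[TestFixture]
[Category("Integration")]   // excluded from the fast unit-test run
public class MsmqMessageWriterTests
{
    // Assumed local private test queue.
    private const string QueuePath = @".\Private$\test-queue";

    [Test]
    public void should_write_message_to_the_real_queue()
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        // The concrete implementation of the queue-writing interface.
        var writer = new MsmqMessageWriter(QueuePath);
        writer.Write("hello");

        // Verify the message really landed on the physical queue.
        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            var message = queue.Receive(TimeSpan.FromSeconds(5));
            Assert.AreEqual("hello", (string)message.Body);
        }
    }
}
```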

Would you put a system into production where messages are written to queues without ever having tested that the messages actually reach the physical/logical queue? I wouldn't.

Knut Haugen