I've just started writing unit tests for a legacy code module with large physical dependencies via the #include directive. I've been dealing with them in a few ways that felt overly tedious (providing empty headers to break long #include dependency lists, and using #define to prevent classes from being compiled) and was looking for some better strategies for handling these problems.

I've frequently been running into the problem of duplicating almost every header file with a blank version in order to isolate the class I'm testing in its entirety, and then writing substantial stub/mock/fake code for objects that need to be replaced since they're now undefined.
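
For reference, the blank-header trick I mean amounts to roughly this (the names are only illustrative): the test build's include path lists a stub directory ahead of the real headers, so the blank version shadows the real one.

// test_stubs/HeavyDependency.h -- stub that shadows the real header because
// the test build's include path lists test_stubs/ before the real directory
#ifndef HEAVYDEPENDENCY_H
#define HEAVYDEPENDENCY_H

// Just enough of a stand-in for the class under test to compile;
// the real HeavyDependency.h (and everything it pulls in) is never seen.
class HeavyDependency {
public:
    void doSomething() {}   // no-op fake that I end up rewriting per test
};

#endif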

Anyone know some better practices?

+1  A: 

Since you're testing legacy code, I'm assuming you can't refactor said code to have fewer dependencies (e.g. by using the pimpl idiom).

That leaves you with few options, I'm afraid. Every header that was included for a type or function will need a mock of that type or function for everything to compile; there's little else you can do...
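
For illustration, the pimpl idiom mentioned above hides the private members behind an opaque pointer, so the public header no longer drags their headers in. A minimal sketch, with made-up names (Widget, WidgetImpl, HeavyDependency):

// Widget.h -- public header: only a forward declaration, no heavy #includes
class WidgetImpl;               // forward declaration instead of #include

class Widget {
public:
    Widget();
    ~Widget();                  // defined in the .cpp, where WidgetImpl is complete
    void doWork();
private:
    WidgetImpl* impl_;          // heavy dependencies live behind this pointer
};                              // (a smart pointer would be the usual choice)

// Widget.cpp -- the only file that needs the heavy headers
#include "Widget.h"
#include "HeavyDependency.h"    // hypothetical heavy header, now hidden from clients

class WidgetImpl {
public:
    HeavyDependency dep;
    void doWork() { /* real work using dep */ }
};

Widget::Widget() : impl_(new WidgetImpl) {}
Widget::~Widget() { delete impl_; }
void Widget::doWork() { impl_->doWork(); }

With that split, test code that includes Widget.h never sees the heavy headers at all.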

Pieter
A: 

If you keep writing stub/mock/fake code, you risk unit testing a class that behaves differently than it does when compiled in the main project.

But if those includes are there and add no behavior of their own, then it's OK.

I'd try not to change anything in the includes while doing the unit testing, so you're sure (as far as you can be with legacy code :) ) that you're testing the real code.

+1  A: 

I am not answering your question directly, but I am afraid that unit testing may just not be the thing to do if you are working with large amounts of legacy code.

After leading an XP team on a greenfield development project, I really loved my unit tests. Things happened, and a few years later I find myself working on a large legacy code base with lots of quality problems.

I tried to find a way to add unit tests to the application, but in the end I just got stuck in a Catch-22:

  1. In order to write meaningful unit tests, the code would need to be refactored.
  2. Without unit tests it will be too dangerous to refactor the code.

If you feel like a hero and have drunk the Kool-Aid on unit testing, then you may still give it a try, but there is a real risk that you end up with just more test code of little value that now also needs to be maintained.

Sometimes it is just best to work on the code in the way that is "designed" to be worked on.

James Dean
A: 

You're definitely between a rock and a hard place with legacy code that has large dependencies. You've got a long, hard slog ahead to sort it all out.

From what you say, it seems you are trying to keep the source code intact for each module in turn, placing it in a test harness with external dependencies mocked out. My suggestion here would be to take the even braver step of attempting some refactoring to eliminate (or invert) the dependencies, which is probably the very step you are trying to avoid.

I suggest this because I'm guessing the dependencies are going to kill you as you write tests. You will certainly be better off in the long term if you can eliminate the dependencies.
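
One common shape for that refactoring is to extract a small interface from a heavyweight collaborator and hand it to the class under test. A rough sketch, with all names made up for illustration:

#include <string>
#include <vector>

// Small interface extracted from the heavyweight collaborator.
class MailSender {
public:
    virtual ~MailSender() {}
    virtual void send(const std::string& msg) = 0;
};

// Production implementation. In the real project this body (and its heavy
// #includes) would sit in its own .cpp, away from the class under test.
class SmtpMailSender : public MailSender {
public:
    void send(const std::string& msg) { (void)msg; /* real SMTP code */ }
};

// The class under test now depends only on the interface, not the heavy header.
class Notifier {
public:
    explicit Notifier(MailSender& sender) : sender_(sender) {}
    void notifyDone() { sender_.send("done"); }
private:
    MailSender& sender_;
};

// In the test program, a fake records the messages for assertions.
class FakeMailSender : public MailSender {
public:
    std::vector<std::string> sent;
    void send(const std::string& msg) { sent.push_back(msg); }
};

This is the inversion: Notifier no longer knows anything about SMTP, and the test build never has to compile it.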

quamrana
+1  A: 

I don't know if this will work for your project but you might try to attack the problem from the link phase of your build.

This would completely eliminate your #include problem. All you would need to do is re-implement the interfaces declared in the included files to do whatever you want, and then link against the mock object files you have created to implement those interfaces.

The big disadvantage of this method is a more complicated build system.
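
To make that a little more concrete, here is a rough sketch of the link-time substitution (file and class names are made up): the code under test keeps including the real header, but the test executable is linked against a fake implementation file instead of the real one.

// Database.h -- the real header, included unchanged by the code under test
#include <string>

class Database {
public:
    bool save(const std::string& record);
};

// Database.cpp -- real implementation, linked into the production build only:
//   bool Database::save(const std::string& record) { /* talks to the real DB */ }

// FakeDatabase.cpp -- linked into the TEST build in place of Database.cpp
#include "Database.h"

bool Database::save(const std::string& record)
{
    (void)record;    // canned behavior for the tests
    return true;
}

// The test executable is then built from the code under test plus
// FakeDatabase.o rather than Database.o, e.g. an illustrative make rule:
//   test_runner: ModuleUnderTest.o FakeDatabase.o test_main.o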

witkamp
+6  A: 

The depression in the responses is overwhelming... But don't fear, we've got the holy book (Michael Feathers' Working Effectively with Legacy Code) to exorcise the demons of legacy C++ code. Seriously, just buy the book if you are in for more than a week of jousting with legacy C++ code.

Turn to page 127: The case of the horrible include dependencies. (Now, I am not even within miles of Michael Feathers, but here is an as-short-as-I-could-manage answer..)

Problem: In C++, if ClassA needs to know about ClassB, ClassB's declaration is straight-lifted / textually included into ClassA's source file. And since we programmers love to take things to the wrong extreme, a file can recursively include a zillion others transitively. Builds take years.. but hey, at least it builds.. we can wait.

Now to say 'instantiating ClassA under a test harness is difficult' is an understatement. (Quoting MF's example - Scheduler is our poster problem child with deps galore.)

#include "TestHarness.h"
#include "Scheduler.h"
TEST(create, Scheduler)  // your fave C++ test framework macro
{
  Scheduler scheduler("fred");
}

This will bring out the includes dragon with a flurry of build errors.
Blow#1 Patience-n-Persistence: Take on each include one at a time and decide if we really need that dependency. Let's assume SchedulerDisplay is one of them, whose displayEntry method is called in Scheduler's ctor.
Blow#2 Fake-it-till-you-make-it (Thanks RonJ):

#include "TestHarness.h"
#include "Scheduler.h"
void SchedulerDisplay::displayEntry(const string& entryDescription) {}
TEST(create, Scheduler)
{
  Scheduler scheduler("fred");
}

And pop goes the dependency and all its transitive includes. You can also reuse the fake methods by collecting them in a Fakes.h file to be included in your test files.
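
A Fakes.h along those lines might look roughly like this (SchedulerDisplay is the class from the example above; the header name is an assumption). Include it in exactly one translation unit per test program, or the null definitions will collide at link time:

// Fakes.h -- collected null implementations for the test build
#ifndef FAKES_H
#define FAKES_H

#include "SchedulerDisplay.h"   // assuming the real declaration lives here

// Null bodies that stand in for the real definitions in this test program.
void SchedulerDisplay::displayEntry(const string& entryDescription) {}
// ...add further fakes here as more dependencies get broken

#endif
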
Blow#3 Practice: It may not always be that simple.. but you get the idea. After the first few duels, the process of breaking deps will get easy-n-mechanical.

Caveats (Did I mention there are caveats? :)

  • We need a separate build for the test cases in this file; we can have only one definition of the SchedulerDisplay::displayEntry method in a program. So create a separate program for the Scheduler tests.
  • We aren't breaking any dependencies in the program, so we are not making the code cleaner.
  • You need to maintain those fakes for as long as you need the tests.
  • Your sense of aesthetics may be offended for a while.. just bite your lip and 'bear with us for a better tomorrow'

Use this technique for a very huge class with severe dependency issues. Don't use it often or lightly.. Use it as a starting point for deeper refactorings. Over time this test program can be taken behind the barn as you extract more classes (WITH their own tests).

For more.. please do read the book. Invaluable. Fight on bro!

Gishu
While I find this to be an acceptable answer, I feel it really glosses over the gap between providing fake stubs in the alternate function implementations and the magic that has to be performed during the build process.
MasD