We have a large, multi-platform application written in C (with a small but growing amount of C++). It has evolved over the years and has many of the "features" you would expect in a large C/C++ application:

  • #ifdef hell
  • Large files that make it hard to isolate testable code
  • Functions that are too complex to be easily testable

Since this code targets embedded devices, running it on the actual target involves a lot of overhead. So we would like to do more of our development and testing in quick cycles on a local system. But we would like to avoid the classic strategy of "copy/paste into a .c file on your system, fix bugs, copy/paste back". If developers are going to go to the trouble of doing that, we'd like to be able to recreate the same tests later and run them in an automated fashion.

Here's our problem: in order to refactor the code to be more modular, we need it to be more testable. But in order to introduce automated unit tests, we need it to be more modular.

One problem is that since our files are so large, we might have a function in a file that calls another function in the same file, and we need to stub out that callee to make a good unit test. It seems like this would be less of a problem as our code gets more modular, but that is a long way off.
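
(One standard technique for exactly this same-file situation -- not proposed in the question, just a common "seam" trick -- is to route the internal call through a function pointer that a test can swap out. A minimal sketch, with entirely hypothetical names:)

    /* big_module.c -- all names are hypothetical, for illustration only */

    static int read_sensor_raw(void)      /* the same-file callee to stub */
    {
        /* ...talks to the hardware on the real target... */
        return 42;
    }

    /* Seam: callers go through this pointer instead of calling
     * read_sensor_raw() directly; production behavior is unchanged. */
    static int (*read_sensor_fn)(void) = read_sensor_raw;

    int sensor_average(int samples)
    {
        int i, sum = 0;
        for (i = 0; i < samples; ++i)
            sum += read_sensor_fn();       /* indirect call = test seam */
        return samples > 0 ? sum / samples : 0;
    }

    #ifdef UNIT_TEST
    /* The test swaps the pointer for a stub -- no copy/paste required. */
    static int fake_sensor(void) { return 10; }

    int sensor_average_unit_test(void)
    {
        read_sensor_fn = fake_sensor;
        return sensor_average(4) == 10;    /* 1 = pass */
    }
    #endif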

One thing we thought about doing was tagging "known to be testable" source code with comments. Then we could write a script to scan source files for testable code, compile it into a separate file, and link it with the unit tests. We could slowly introduce the unit tests as we fix defects and add more functionality.

However, there is concern that maintaining this scheme (along with all the required stub functions) will become too much of a hassle, and developers will stop maintaining the unit tests. So another approach is to use a tool that automatically generates stubs for all the code and link the tests against that. (The only tool we have found that will do this is an expensive commercial product.) But this approach seems to require that all our code be more modular before we can even begin, since only external calls can be stubbed out.

Personally, I would rather have developers think about their external dependencies and intelligently write their own stubs. But stubbing out all the dependencies of a horribly overgrown, 10,000-line file could be overwhelming. It might be difficult to convince developers that they need to maintain stubs for all their external dependencies, but is that the right way to do it? (One other argument I've heard is that the maintainer of a subsystem should maintain the stubs for their subsystem. But I wonder if "forcing" developers to write their own stubs would lead to better unit testing?)
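
(To make the hand-written-stub idea concrete: a stub file provides the same symbols as the real dependency and is linked into the test build in its place. A minimal sketch, assuming a hypothetical flash driver:)

    /* flash_stubs.c -- linked into the test build in place of the real
     * flash driver.  All names here are hypothetical.                   */
    #include <string.h>

    /* State captured by the stub so tests can assert on how the code
     * under test used the dependency.                                   */
    unsigned long stub_flash_last_addr;
    unsigned char stub_flash_last_data[256];
    int           stub_flash_result;        /* value the stub returns    */

    /* Signature must match the one declared in the real flash.h. */
    int flash_write(unsigned long addr, const void *data, unsigned len)
    {
        stub_flash_last_addr = addr;
        if (len > sizeof stub_flash_last_data)
            len = sizeof stub_flash_last_data;
        memcpy(stub_flash_last_data, data, len);
        return stub_flash_result;
    }

(The test build then links the unit under test against flash_stubs.o instead of the real driver; nothing in the production source changes, only the link line.)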

The #ifdefs, of course, add another whole dimension to the problem.

We have looked at several C/C++ based unit test frameworks, and there are a lot of options that look fine. But we have not found anything to ease the transition from "hairball of code with no unit tests" to "unit-testable code".

So here are my questions to anyone else who has been through this:

  • What is a good starting point? Are we going in the right direction, or are we missing something obvious?
  • What tools might be useful to help with the transition? (preferably free/open source, since our budget right now is roughly "zero")

Note: our build environment is Linux/UNIX-based, so we can't use any Windows-only tools.

+1  A: 

It's much easier to make it more modular first. You can't really unit test something with a whole lot of dependencies. Deciding when to refactor is a tricky calculation: you really have to weigh the costs and risks against the benefits. Is this code something that will be reused extensively, or is it really not going to change? If you plan to continue to get use out of it, then you probably want to refactor.

It sounds like you want to refactor, though. You need to start by breaking out the simplest utilities and building on them. You have a C module that does a gazillion things. Maybe, for example, there's some code in there that always formats strings a certain way. Maybe this can be pulled out into a stand-alone utility module. Now you've got a new string-formatting module and you've made the code more readable; it's already an improvement. You are asserting that you're in a catch-22 situation, but you really aren't: just by moving things around, you've made the code more readable and maintainable.

Now you can create a unit test for this broken-out module. You can do that a couple of ways: make a separate app that just includes your code and runs a bunch of cases in a main routine on your PC, or define a static function called "UnitTest" that executes all the test cases and returns 1 if they pass. The second variant could also be run on the target.
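
(A rough sketch of both variants, using a hypothetical string-formatting module pulled out of the big file:)

    /* str_format.c -- hypothetical broken-out utility */
    #include <stdio.h>
    #include <string.h>

    /* Formats a millivolt reading as "x.xxx V" into buf. */
    void format_voltage(char *buf, size_t bufsize, int millivolts)
    {
        snprintf(buf, bufsize, "%d.%03d V", millivolts / 1000, millivolts % 1000);
    }

    /* Variant 1: self-test that can also run on the target.
     * Returns 1 if all cases pass, 0 otherwise. */
    int format_voltage_unit_test(void)
    {
        char buf[32];
        format_voltage(buf, sizeof buf, 3300);
        if (strcmp(buf, "3.300 V") != 0) return 0;
        format_voltage(buf, sizeof buf, 5);
        if (strcmp(buf, "0.005 V") != 0) return 0;
        return 1;
    }

    /* Variant 2: stand-alone PC test driver (compile with -DTEST_MAIN). */
    #ifdef TEST_MAIN
    int main(void)
    {
        int passed = format_voltage_unit_test();
        printf("format_voltage: %s\n", passed ? "PASS" : "FAIL");
        return passed ? 0 : 1;
    }
    #endif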

Maybe you can't go 100% with this approach, but it's a start, and it may help you see other things that can be easily broken out into testable utilities.

Doug T.
+3  A: 

One approach to consider is to first put a system-wide simulation framework in place that you could use to develop integration tests. Starting with integration tests might seem counterintuitive, but the problems in doing true unit testing in the environment you describe are quite formidable. Probably more so than just simulating the entire run-time in software...

This approach would simply bypass your listed issues -- although it would give you many different ones. In practice though, I've found that with a robust integration testing framework you can develop tests that exercise functionality at the unit level, although without unit isolation.

PS: Consider writing a command-driven simulation framework, maybe built on Python or Tcl. This will let you script tests quite easily...
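
(One way to realize this -- an assumption on my part, not spelled out in the answer -- is a small C test driver that reads commands on stdin, so Python or Tcl scripts can drive the simulated system through a pipe. A hypothetical sketch of that driver:)

    /* sim_driver.c -- hypothetical command-driven test driver.
     * A Python/Tcl test script pipes commands on stdin, e.g.:
     *     set_input 5
     *     step
     *     expect_output 7
     */
    #include <stdio.h>
    #include <string.h>

    /* Placeholder "system under test": in real use these would be the
     * entry points of the simulated module.                             */
    static int g_input, g_output;
    static void sut_set_input(int v) { g_input = v; }
    static void sut_step(void)       { g_output = g_input + 2; }
    static int  sut_get_output(void) { return g_output; }

    int main(void)
    {
        char line[128];
        int failures = 0, arg;

        while (fgets(line, sizeof line, stdin)) {
            if (sscanf(line, "set_input %d", &arg) == 1)
                sut_set_input(arg);
            else if (strncmp(line, "step", 4) == 0)
                sut_step();
            else if (sscanf(line, "expect_output %d", &arg) == 1) {
                int got = sut_get_output();
                if (got != arg) {
                    printf("FAIL: expected %d, got %d\n", arg, got);
                    ++failures;
                }
            }
        }
        printf("%d failure(s)\n", failures);
        return failures != 0;
    }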

Jeff Kotula
Very good advice! With good integration tests in place, he can start refactoring the code to be more modular and more unit-testable. Without any tests at all, it would be far too risky to start refactoring into more unit-testable code.
JacquesB
Thanks for the answer. A whole-system simulation would be great. This is one possibility, but it's a huge amount of work. Some modules are already separated out enough that the developers can run them in their own simulation environment, but right now I would say that this is the exception and not the rule.
Mike
+2  A: 

G'day,

I'd start by having a look at any obvious points, e.g. using declarations in header files, for one.

Then start looking at how the code has been laid out. Is it logical? Maybe start breaking large files down into smaller ones.

Maybe grab a copy of John Lakos's excellent book "Large-Scale C++ Software Design" to get some ideas on how it should be laid out.

Once you start getting a bit more faith in the code base itself, i.e. its layout at the file level, and have cleared up some of the bad smells, e.g. using declarations in header files, then you can start picking out some functionality to write your unit tests against.

Pick a good framework (I like CUnit and CppUnit) and go from there.

It's going to be a long, slow journey though.

HTH

cheers,

Rob Wells
+17  A: 

"we have not found anything to ease the transition from "hairball of code with no unit tests" to 'unit-testable code'."

How sad -- no miraculous solution -- just a lot of hard work correcting years of accumulated technical debt.

There is no easy transition. You have a large, complex, serious problem.

You can only solve it in tiny steps. Each tiny step involves the following.

  1. Pick a discrete piece of code that's absolutely essential. (Don't nibble around the edges at junk.) Pick a component that's important and -- somehow -- can be carved out of the rest. While a single function is ideal, it might be a tangled cluster of functions or maybe a whole file of functions. It's okay to start with something less than perfect for your testable components.

  2. Figure out what it's supposed to do. Figure out what its interface is supposed to be. To do this, you may have to do some initial refactoring to make your target piece actually discrete.

  3. Write an "overall" integration test that -- for now -- tests your discrete piece of code more or less as it was found (see the sketch after this list). Get this to pass before you try to change anything significant.

  4. Refactor the code into tidy, testable units that make better sense than your current hairball. You're going to have to maintain some backward compatibility (for now) with your overall integration test.

  5. Write unit tests for the new units.

  6. Once it all passes, decommission the old API and fix whatever is broken by the change. If necessary, rework the original integration test; it tests the old API, but you want to test the new API.

Iterate.
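
(As a sketch of step 3, with hypothetical names and placeholder values: a small driver that pins down whatever the carved-out code does today, so any behavioral change during the refactoring shows up immediately.)

    /* legacy_checksum_test.c -- "overall" test for a carved-out piece.
     * All names are hypothetical; the expected values would be recorded
     * once from the current implementation, warts and all.              */
    #include <stdio.h>

    extern unsigned legacy_checksum(const unsigned char *buf, unsigned len);

    static int failures;

    static void check(const char *name, unsigned got, unsigned expected)
    {
        if (got != expected) {
            printf("FAIL %s: got 0x%08X, expected 0x%08X\n", name, got, expected);
            ++failures;
        }
    }

    int main(void)
    {
        static const unsigned char msg[] = "legacy behavior, warts and all";

        /* Expected values: whatever the code produced when the test was
         * first written (placeholders here).                             */
        check("empty buffer", legacy_checksum(msg, 0), 0x0u);
        check("full buffer",  legacy_checksum(msg, sizeof msg - 1), 0x12345678u);

        printf("%d failure(s)\n", failures);
        return failures != 0;
    }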

S.Lott
This answer sounds great, but it assumes that we have time in the schedule to do this. ;) We may end up doing basically this, but not following your advice exactly. That is, we might need to "nibble around the edges at junk": the junk being whatever code has bugs, whatever code we are introducing, or whatever code we have to touch in order to introduce new code. We probably do not have the luxury of picking a gigantic, core function to start with.
Mike
If you don't start with a core function, testing is optional. Managers will decide that testing isn't required and will abandon it. If you start with something core, testing becomes essential.
S.Lott
I agree with you, and I wish it were that simple. Unfortunately, Wall Street only looks at the next quarter. So middle management wants testing to be essential, but upper management wants more features as fast as possible! They want quality, scope, and schedule, but they are not willing to take a scope or schedule hit. So we need to strike a balance.
Mike
@Mike: It is very simple. The management conflict will -- inevitably -- doom the effort unless you dig in to something that matters. The situation is bad, but not complex. Save your emails and chuckle knowingly when the testing for a module is cancelled or overruled.
S.Lott
+17  A: 

Michael Feathers wrote the bible on this, Working Effectively with Legacy Code

George V. Reilly
Yes, this is a really good book for this sort of thing (it's a bit painful, though)
Ray Tayek
A short version of the book: http://www.objectmentor.com/resources/articles/WorkingEffectivelyWithLegacyCode.pdf
Esko Luontola
+2  A: 

In my limited experience with legacy code and introducing testing, the approach would be to create "characterization tests". You start creating tests with known inputs and then record the outputs. These tests are useful for methods/classes whose exact behavior you don't know, but which you know are working.

However, there are times when it's nearly impossible to create unit tests (even characterization tests). In that case I attack the problem through acceptance tests (FitNesse, in this case).

You create the whole set of classes needed to test one feature and check it in FitNesse. It's similar to characterization tests, but one level higher.

Edison Gustavo Muenz
+3  A: 

I have worked on both greenfield projects with fully unit-tested code bases and large C++ applications that have grown over many years with many different developers on them.

Honestly, I would not bother attempting to get a legacy code base to the state where unit tests and test-first development can add a lot of value.

Once a legacy code base gets to a certain size and complexity, getting it to the point where unit test coverage provides you with a lot of benefits becomes a task equivalent to a full rewrite.

The main problem is that as soon as you start refactoring for testability you will begin introducing bugs. And only once you get high test coverage can you expect all those new bugs to be found and fixed.

That means you either go very slowly and carefully and do not get the benefits of a well unit-tested code base until years from now (probably never, since mergers etc. happen). In the meantime you are probably introducing some new bugs with no apparent value to the end user of the software.

Or you go fast but have an unstable code base until you have reached high test coverage of all your code. (So you end up with two branches: one in production, one for the unit-tested version.)

Of course, this is all a matter of scale; for some projects a rewrite might take just a few weeks and can certainly be worth it.

James Dean
A: 

You cannot.

Your code will be sufficiently non-modular that introducing unit testing is impossible.

The only way you could introduce unit testing is to refactor your code into units.

You cannot refactor your code, because your code is too non-modular to permit that to occur.

My basic advice in this situation is: don't get into this situation.

Blank Xavier
+2  A: 

As George said, Working Effectively with Legacy Code is the bible for this kind of thing.

However, the only way others on your team will buy in is if they personally see the benefit of keeping the tests working.

To achieve this you need a test framework that is as easy as possible to use. Plan for other developers to take your tests as examples when writing their own. If they do not have unit-testing experience, don't expect them to spend time learning a framework; they will probably see writing unit tests as slowing their development, so not knowing the framework becomes an excuse to skip the tests.

Spend some time on continuous integration using CruiseControl, Luntbuild, CDash, etc. If your code is automatically compiled every night and the tests run, developers will start to see the benefits when unit tests catch bugs before QA does.

One thing to encourage is shared code ownership. If a developer changes their code and breaks someone else's test, they should not expect that person to fix the test; they should investigate why the test is failing and fix it themselves. In my experience this is one of the hardest things to achieve.

Most developers write some form of unit test already, sometimes a small piece of throw-away code they don't check in or integrate into the build. Make integrating these into the build easy and developers will start to buy in.

My approach is to add tests for new code and as existing code is modified. Sometimes you cannot add as many or as detailed tests as you would like without decoupling too much existing code; err on the side of the practical.

The only place I insist on unit tests is platform-specific code. Where #ifdefs are replaced with platform-specific higher-level functions/classes, these must be tested on all platforms with the same tests. This saves loads of time when adding new platforms.
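
(A rough illustration of this idea, with hypothetical names: the #ifdef disappears into one implementation file per platform, and a single test exercises the common interface on every platform.)

    /* os_mutex.h -- common interface; callers see no #ifdefs */
    typedef struct os_mutex os_mutex_t;
    os_mutex_t *os_mutex_create(void);
    int         os_mutex_lock(os_mutex_t *m);
    int         os_mutex_unlock(os_mutex_t *m);
    void        os_mutex_destroy(os_mutex_t *m);

    /* os_mutex_posix.c -- one per-platform implementation; the build
     * selects os_mutex_posix.c, os_mutex_vxworks.c, ... per target.  */
    #include <pthread.h>
    #include <stdlib.h>

    struct os_mutex { pthread_mutex_t m; };

    os_mutex_t *os_mutex_create(void)
    {
        os_mutex_t *m = malloc(sizeof *m);
        if (m && pthread_mutex_init(&m->m, NULL) != 0) { free(m); m = NULL; }
        return m;
    }
    int  os_mutex_lock(os_mutex_t *m)    { return pthread_mutex_lock(&m->m); }
    int  os_mutex_unlock(os_mutex_t *m)  { return pthread_mutex_unlock(&m->m); }
    void os_mutex_destroy(os_mutex_t *m) { pthread_mutex_destroy(&m->m); free(m); }

    /* os_mutex_test.c -- the same test runs against every implementation */
    int os_mutex_unit_test(void)
    {
        os_mutex_t *m = os_mutex_create();
        if (!m) return 0;
        if (os_mutex_lock(m) != 0 || os_mutex_unlock(m) != 0) return 0;
        os_mutex_destroy(m);
        return 1;   /* 1 = pass */
    }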

We use boost::test to structure our tests; the simple self-registering functions make writing tests easy.

These are wrapped in CTest (part of CMake), which runs a group of unit-test executables at once and generates a simple report.

Our nightly build is automated with Ant and Luntbuild (Ant glues the C++, .NET and Java builds together).

Soon I hope to add automated deployment and functional tests to the build.

iain
+1  A: 

I think you basically have two separate problems:

  1. A large code base to refactor
  2. Working with a team

Modularization, refactoring, inserting unit tests and the like are difficult tasks, and I doubt that any tool could take over larger parts of that work. It's a rare skill. Some programmers can do it very well; most hate it.

Doing such a task with a team is tedious. I strongly doubt that "forcing" developers will ever work. Iain's thoughts are very good, but I would consider finding one or two programmers who are able and willing to "clean up" the sources: refactor, modularize, introduce unit tests, etc. Let these people do the job while the others introduce new bugs, ahem, functions. Only people who like that kind of work will succeed at it.

RED SOFT ADAIR
+1  A: 

Make using tests easy.

I'd start by putting the "runs automatically" part into place. If you want developers (including yourself) to write tests, make it easy to run them and see the results.

Writing a three-line test, running it against the latest build and seeing the results should be only one click away, and should not send the developer off to the coffee machine.

This means you need a current build, and you may need to change policies for how people work on the code, etc. I know that such a process can be a PITA with embedded devices, and I can't give any advice there. But I know that if running the tests is hard, no one will write them.

Test what can be tested

I know I run against common unit-testing philosophy here, but that's what I do: write tests for the things that are easy to test. I don't bother with mocking, I don't refactor to make code testable, and if there is UI involved I don't have a unit test. But more and more of my library routines have one.

I am quite amazed at what simple tests tend to find. Picking the low-hanging fruit is by no means useless.

Looking at it another way: you wouldn't plan to maintain that giant hairball mess if it wasn't a successful product. Your current quality control isn't a total failure that needs to be replaced. Rather, use unit tests where they are easy to do.

(You need to get it done, though. Don't get trapped into "fixing everything" around your build process.)

Teach how to improve your code base

Any code base with that history screams for improvements, that's for sure. You will never refactor all of it, though.

Looking at two pieces of code with the same functionality, most people can agree which one is "better" under a given aspect (performance, readability, maintainability, testability, ...). There are three hard parts:

  • how to balance the different aspects
  • how to agree that this piece of code is good enough
  • how to turn bad code into good enough code without breaking anything.

The first point is probably the hardest, and is as much a social as an engineering question. But the other points can be learned. I don't know of any formal courses that take this approach, but maybe you can organize something in-house: anything from two people working together to "workshops" where you take a nasty piece of code and discuss how to improve it.


peterchen
A: 

By rewriting in a non-legacy platform. :-)

Jeff