People at my company see unit testing as a lot of extra work that offers fewer benefits than our existing functional tests. Are unit and integration tests worth the effort? Note that we have a large existing codebase that wasn't designed with testing in mind.
It depends on whether your functional tests are automated or done manually. If it's the latter, then any kind of automated test suite is useful, since the cost of running those unit / integration tests is far lower than running manual functional tests. You can show real ROI there. I would recommend starting with writing some integration tests and, if time / budget allows in the future, taking a look at unit testing then.
Retroactively writing unit tests for legacy code can very often NOT be worth it. Stick with functional tests, and automate them.
What we've then done is adopt the guideline that any bug fix (or new feature) must be accompanied by unit tests that at least cover the fix. That way the project is at least moving in the right direction.
And I have to agree with Jon Skeet (how could I not?) in recommending "Working Effectively With Legacy Code"; it really was a helpful skim/read.
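To make that guideline concrete, here is a minimal sketch of what "a unit test accompanying the fix" can look like; the function, the bug, and the ticket number are all made up, and plain C assert stands in for whatever test framework you use:

```c
#include <assert.h>

/* Hypothetical function under test: converts a price to whole cents.
   The reported (made-up) bug: 19.99 came back as 1998 due to float
   truncation; the fix adds the + 0.5 rounding term. */
static long price_to_cents(double price)
{
    return (long)(price * 100.0 + 0.5);
}

/* Regression test committed alongside the fix, named after the (made-up)
   ticket, so the dead bug stays dead. */
static void test_bug_1234_price_rounding(void)
{
    assert(price_to_cents(19.99) == 1999);  /* the exact case from the report */
    assert(price_to_cents(0.10) == 10);
}

int main(void)
{
    test_bug_1234_price_rounding();
    return 0;
}
```

Even a handful of tests like this, accumulated fix by fix, gives the untested parts of the codebase a slowly growing safety net.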
(I'm assuming that you're using "functional test" to mean a test involving the whole system or application being up and running.)
I would unit test new functionality as I wrote it, for three reasons:
- It helps me get to working code quicker. The turnaround time for "unit test failed, fix code, unit test passed" is generally a lot shorter than "functional test failed, fix code, functional test passed".
- It helps me to design my code in a cleaner way.
- It helps me understand my code and what it's meant to be doing when I come to maintain it. If I make a change, it will give me more confidence that I haven't broken anything.
(This includes bug fixes, as suggested by Epaga.)
I would strongly recommend Michael Feathers' "Working Effectively with Legacy Code" to give you tips on how to start unit testing a codebase which wasn't designed for it.
Yes, they are worth it. I have been faster at coding since I started unit testing: I spend less time fixing bugs and more time thinking about what my code should do.
Most people are unaware of what automated unit tests are for:
- To experiment with a new technology
- To document how to use a part of the code
- To make sure a dead bug stays dead
- To allow you to refactor the code
- To allow you to change any major parts of the code
- To create a lower watermark below which the quality of your product cannot possibly drop
- To increase development speed, because now you know that something works (instead of hoping it does until a customer reports a bug).
So if any of these reasons bring you a benefit, automated unit tests are for you. If not, then don't waste your time.
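For instance, the first two items on that list can be covered by a "learning test": an executable note on how an unfamiliar API behaves. A minimal sketch, probing nothing more than the standard library's strtol:

```c
#include <assert.h>
#include <errno.h>
#include <limits.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *end;

    /* endptr is left pointing at the first unparsed character */
    long v = strtol("42 apples", &end, 10);
    assert(v == 42);
    assert(strcmp(end, " apples") == 0);

    /* overflow clamps to LONG_MAX and sets errno to ERANGE */
    errno = 0;
    v = strtol("999999999999999999999999", &end, 10);
    assert(v == LONG_MAX);
    assert(errno == ERANGE);

    /* no digits at all: returns 0 and leaves endptr at the start */
    const char *s = "apples";
    v = strtol(s, &end, 10);
    assert(v == 0);
    assert(end == s);

    return 0;
}
```

Kept in the suite, a test like this documents the behaviour for the next reader.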
One application I was brought in to consult on for FAT (factory acceptance testing) consisted of a 21,000-line switch statement. Most units of functionality were a few dozen to a couple of hundred lines in a case statement. The application was built in several variants, so there were many #ifdef sections in the switch.
It was not designed for unit testing - it was not factored at all.
(It was designed in the sense that there was a definite, easy-to-comprehend architecture: malloc a struct, send the main loop a user message with the pointer to the struct as the lparam, and free it when the message is processed. But form did not follow function, which is the central tenet of good design.)
To add unit testing to new functionality would mean a major break with the pattern; either you would need to put your code somewhere other than the big switch, and double the complexity of the variant selection mechanism, or make a large amount of scaffolding to put the messages in the queue to trigger the new functionality.
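To illustrate the shape of the problem, here is a heavily condensed sketch of that kind of structure; all names are invented and nothing below is the actual application's code:

```c
#include <stdlib.h>

#define MSG_RECALC_TOTALS 1001   /* invented message ids */
#define MSG_EXPORT_REPORT 1002

struct recalc_args { double rate; int year; };

/* Everything funnels through one handler; each "unit" is a case. */
static void handle_user_message(unsigned int msg, void *lparam)
{
    switch (msg) {
    case MSG_RECALC_TOTALS: {
        struct recalc_args *a = lparam;
        (void)a;                 /* ...a few hundred lines of logic inline... */
        free(a);
        break;
    }
#ifdef VARIANT_PRO               /* variant selection multiplies the cases */
    case MSG_EXPORT_REPORT:
        free(lparam);
        break;
#endif
    default:
        free(lparam);
        break;
    }
}

int main(void)
{
    /* The only way to trigger any functionality: malloc the struct and
       send the message, exactly as the real message loop would. */
    struct recalc_args *a = malloc(sizeof *a);
    if (a) {
        a->rate = 0.07;
        a->year = 2009;
        handle_user_message(MSG_RECALC_TOTALS, a);   /* handler frees it */
    }
    return 0;
}
```

There is no smaller function boundary to call, so a test has to allocate the struct, fake the dispatch and observe side effects; that is the scaffolding cost being described.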
So though it's certainly desirable to unit test new functionality, it's not always practical if a system isn't already well factored. Either there's a significant amount of work to refactor the system to allow unit testing, or you end up bench-testing the code and cutting and pasting it into the existing framework - and a copy of unit-tested code isn't unit-tested code.
As it happens, I read a paper last night on this very subject. The authors compare projects within four groups at Microsoft and IBM, contrasting, in hindsight, projects which used both unit testing and functional testing and projects which used functional testing alone. To quote the authors:
"The results of the case studies indicate that the preview release defect density of the four products decreased between 40% and 90% relative to similar projects that did not use the TDD practice. Subjectively, the teams experienced a 15 to 35% increase in initial development time after adopting TDD."
This indicates that it is certainly worth doing unit testing when you add new functionality to your project.
You test when you want to know something about something. If you know that your product (system, unit, service, component...) is going to work, then there's no need to test it. If you're uncertain as to whether it will work, you probably have some questions about it. Whether those questions are worth answering is a matter of risk and priorities.
If you're sure that your product will work, and you don't have any questions about it, there is still one question that's worth asking: why don't I have any questions?
---Michael B.
Unit testing is indeed extra work, but it pays off in the long run. Here are its advantages over integration testing:
- you get a regression suite that acts as a safety net when refactoring; the same can be said of integration tests, although it can be hard to tell whether a given test covers a given piece of code.
- unit tests give immediate feedback when you modify the code, and this feedback can be very precise, pointing to the method where the anomaly is.
- those tests are cheap to run: they run very fast (typically a few seconds), without any installation or deployment; just compile and test. So they can be run often.
- it is easy to add a new test, either to reproduce a problem once it is identified (which augments the regression suite) or to answer a question ("what happens if this function is called with a null parameter?"), as in the sketch after this list.
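As a tiny illustration of that last kind of question-answering test (the function name here is invented, and plain assert stands in for whatever test framework you use):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical function under test. */
static const char *display_name(const char *name)
{
    return (name == NULL) ? "(anonymous)" : name;
}

int main(void)
{
    /* Answers the question "what happens with a null parameter?"
       and then stays in the regression suite. */
    assert(strcmp(display_name(NULL), "(anonymous)") == 0);
    assert(strcmp(display_name("Ada"), "Ada") == 0);
    return 0;
}
```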
There clearly is some overlap between the two, but they are complementary as they both offer advantages.
Now, like any software engineering process, testing has to be tailored to the project's needs.
With a large legacy codebase (legacy in the sense of not unit tested), I would recommend restricting unit tests to new features added to the code, as unit tests can be hard to introduce retroactively. In this regard, I can only second (third?) the recommendation of "Working Effectively with Legacy Code" as help for bringing unit testing into an existing codebase.