Most of the discussion on this site is very positive about unit testing. I'm a fan of unit testing myself. However, I've found extensive unit testing brings its own challenges. For example, unit tests are often closely coupled to the code they test, which can make API changes increasingly costly as the volume of tests grows.

Have you found real-world situations where unit tests have been detrimental to code quality or time to delivery? How have you dealt with these situations? Are there any 'best practices' which can be applied to the design and implementation of unit tests?

There is a somewhat related question here: Why didn't unit testing work out for your project?

+1  A: 

An excess of false positives can slow development down, so it's important to test for what you actually want to remain invariant. This usually means writing unit tests in advance for requirements, then following up with more detailed unit tests to detect unexpected shifts in output.
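
To make that concrete, here is a minimal sketch in Python (the slugify function and its requirements are invented for illustration):

    import unittest

    def slugify(title):
        # Hypothetical production code: turn a title into a URL slug.
        return "-".join(title.lower().split())

    class SlugifyTests(unittest.TestCase):
        # Requirement-level tests pin down the invariants...
        def test_case_does_not_matter(self):
            self.assertEqual(slugify("Hello World"), slugify("HELLO WORLD"))

        def test_contains_no_spaces(self):
            self.assertNotIn(" ", slugify("a b c"))

        # ...while a more detailed 'pinning' test detects unexpected shifts
        # in output, and is expected to be revised when the output
        # legitimately changes.
        def test_exact_output(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

    if __name__ == "__main__":
        unittest.main()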

Steven Sudit
+1  A: 

Mostly in cases where the system was developed without unit testing in mind, so that testing was an afterthought rather than a design tool. When you develop with automated tests, the chances of breaking your API diminish.

Otávio Décio
Are there any good designs that would have problems with unit tests?
txwikinger
A: 

If you're sure your code won't be reused, won't need to be maintained, and your project is simple and very short-term, then you may not need unit tests.

Unit tests are useful for facilitating changes and maintenance. They do add a little to time to delivery, but that cost is repaid in the medium to long term. If there is no medium or long term, they may be unnecessary, and manual tests may be enough.

But all of that is very unlikely, so unit tests are still the way to go :)

Also, it might sometimes be a necessary business decision to invest less time in testing in order to make an urgent delivery faster (a debt which will have to be repaid with interest later).

Samuel Carrijo
Unit tests improve the code even if the code isn't going to be changed in the future.
dss539
@dss539 But if the code won't be modified, you no longer need it to be readable, well-designed, etc.
Samuel Carrijo
No, you need it to be readable and well designed to ensure that it ever worked properly in the first place. Unit tests strongly aid in design and may catch a few bugs, too. Just because a piece of code is never going to be changed doesn't mean it can be crap; unit testing helps minimize the chance that a given piece of code is crap.
dss539
+4  A: 

With extensive unit testing you will start to find that refactoring operations are more expensive for exactly the reasons you said.

IMHO this is a good thing. Expensive and big changes to an API should have a bigger cost, relative to small and cheap changes. Refactoring is not a free operation, and it's important to understand the impact on both yourself and the consumers of your API. Unit tests are a great ruler for measuring how expensive an API change will be to consume.

Part of this problem, though, is relieved by tooling. Most IDEs support refactoring operations, either directly or indirectly (via plugins). Using these operations to update your unit tests will relieve a bit of the pain.

JaredPar
Doesn't this rather mean bad unit test design?
txwikinger
@txwikinger, I don't see how it could be bad design. If you do API level unit testing and the API changes, it will result in corresponding changes in the unit tests.
JaredPar
Well... unit tests check against the specification. If that changes, the unit tests need to change too. I would say that is probably a good thing, since API changes are likely to cause breakage somewhere else.
txwikinger
It depends on what you define as the API. It's certainly possible to have unit tests that are too fine-grained, testing implementation details that should be able to change without any problems.
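
For example, a Python sketch of the difference (the functions are invented; run as a script):

    from unittest.mock import patch

    def _tax(amount):
        return amount * 0.2  # implementation detail

    def total(amount):
        return amount + _tax(amount)  # public behaviour

    # Too fine-grained: this breaks if _tax is renamed or inlined, even
    # though the observable behaviour of total() is unchanged.
    with patch("__main__._tax", return_value=2.0) as mock_tax:
        assert total(10) == 12.0
        mock_tax.assert_called_once_with(10)

    # Better: assert only on the public behaviour.
    assert total(10) == 12.0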
Michael Borgwardt
A: 

Yes, there are situations where unit testing can be detrimental to code quality and delivery time. If you create too many unit tests, your code will become mangled with interfaces and your code quality as a whole will suffer. Abstraction is great, but you can have too much of it.

If you're writing unit tests for a prototype, or for a system that has a high chance of undergoing major changes, your unit tests will have an effect on time to delivery. In these cases it's often better to write acceptance tests, which test closer to end to end.

gradbot
Why would you have to create so many interfaces? There's more than one way to inject dependencies... or are you talking about something else?
dss539
Hmm, my opinion may just come from having had to add unit testing to existing projects. I'd love to use TDD from the start on something.
gradbot
Yes, there are non-intrusive ways to inject dependencies, but some people are obsessed with decoupling and abstracting everything excessively, often in the name of testability.
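
For instance, in Python a collaborator can be injected and faked with nothing but duck typing, no extra interface type required (a minimal sketch with invented names):

    class ReportService:
        # Constructor injection: any object with a send() method will do.
        def __init__(self, mailer):
            self.mailer = mailer

        def send_report(self, text):
            self.mailer.send("reports@example.com", text)

    class FakeMailer:
        # Hand-rolled test double; records calls instead of sending mail.
        def __init__(self):
            self.sent = []

        def send(self, to, body):
            self.sent.append((to, body))

    fake = FakeMailer()
    ReportService(fake).send_report("weekly totals")
    assert fake.sent == [("reports@example.com", "weekly totals")]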
Michael Borgwardt
A: 

Slow unit tests can often be detrimental to development. This usually happens when unit tests become integration tests that need to hit web services or the database. If your suite of unit tests takes over an hour to run, you'll often find yourself and your team essentially paralyzed for that hour, waiting to see whether the tests pass (since you don't want to keep building on a broken foundation).
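
One common remedy is to keep the slow resource behind a seam and substitute it in the unit test, so the suite stays fast. A minimal Python sketch (class and method names invented):

    import unittest
    from unittest.mock import Mock

    class OrderStats:
        def __init__(self, db):
            self.db = db  # in production: a real database connection

        def average_order_value(self):
            orders = self.db.fetch_orders()  # the slow call
            return sum(orders) / len(orders)

    class OrderStatsTest(unittest.TestCase):
        def test_average(self):
            # The fake returns canned data instantly; no database needed.
            db = Mock()
            db.fetch_orders.return_value = [10.0, 20.0, 30.0]
            self.assertEqual(OrderStats(db).average_order_value(), 20.0)

    if __name__ == "__main__":
        unittest.main()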

With that being said, I think the benefits far outweigh the drawbacks in all but the most contrived cases.

Kevin Pang
+2  A: 

One of the projects I worked on was heavily unit-tested; we had over 1000 unit tests for 20 or so classes. There was slightly more test code than production code. The unit tests caught innumerable errors introduced during refactoring operations; they definitely made it easy and safe to make changes, extend functionality etc. The released code had a very low bug rate.

To encourage ourselves to write the unit tests, we specifically chose to keep them 'quick and dirty': we would bash out a test as we produced the project code, and since the tests were boring and 'not real code', as soon as we had written one that exercised the functionality of the production code, we were done and moved on. The only criterion for the test code was that it fully exercised the API of the production code.

What we learnt the hard way is that this approach does not scale. As the code evolved, we needed to change the communication pattern between our objects, and suddenly I had 600 failing unit tests! Fixing them took me several days. This level of test breakage happened two or three times over further major architectural refactorings. In each case I don't believe we could reasonably have foreseen the code evolution that was required.

The moral of the story for me was this: unit-test code needs to be just as clean as production code. You simply can't get away with cutting and pasting in unit tests. You need to apply sensible refactoring, and to decouple your tests from the production code where possible by using proxy objects.
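
One way to get that decoupling, sketched in Python (the Message class and helper are invented): route all object creation in the tests through a single test-only factory, so a constructor change is absorbed in one place instead of hundreds of tests.

    import unittest

    class Message:
        # Production class whose constructor may change over time.
        def __init__(self, sender, receiver, priority=0):
            self.sender = sender
            self.receiver = receiver
            self.priority = priority

    def make_message(sender="alice", receiver="bob", **kwargs):
        # Test-only factory: every test builds Messages through this one
        # function, so a signature change is fixed here, not in 600 tests.
        return Message(sender, receiver, **kwargs)

    class MessageTest(unittest.TestCase):
        def test_default_priority(self):
            self.assertEqual(make_message().priority, 0)

        def test_custom_priority(self):
            self.assertEqual(make_message(priority=3).priority, 3)

    if __name__ == "__main__":
        unittest.main()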

Of course, all of this adds some complexity and cost to your unit tests (and can introduce bugs into your tests!), so it's a fine balance. But I do believe that the concept of 'unit tests', taken in isolation, is not the clear and unambiguous win it's often made out to be. My experience is that unit tests, like everything else in programming, require care, and are not a methodology that can be applied blindly. It's therefore surprising to me that I've not seen more discussion of this topic on forums like this one and in the literature.

ire_and_curses
"that this approach does not scale" Wrong conclusion. The correct conclusion is "API Changes are Expensive and Complicated" irrespective of the volume of testing. The approach does scale and the tests revealed the scope and complexity of the changes.
S.Lott
+3  A: 

Are there any 'best practices' which can be applied to the design and implementation of unit tests?

Make sure your unit tests haven't become integration tests. For example, if you have unit tests for a class Foo, then ideally the tests can only break if

  1. there was a change in Foo
  2. or there was a change in the interfaces used by Foo
  3. or there was a change in the domain model (typically you'll have some classes, like "Customer", which are central to the problem space, leave no room for abstraction, and are therefore not hidden behind an interface)

If your tests are failing because of any other changes, then they have become integration tests, and you'll get into trouble as the system grows bigger. Unit tests should have no such scalability issues because they test an isolated unit of code.
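
A minimal Python sketch of such an isolated test (Foo and its collaborator are invented):

    import unittest
    from unittest.mock import Mock

    class Foo:
        # Foo depends only on an abstraction: anything with get_rate().
        def __init__(self, rate_source):
            self.rate_source = rate_source

        def convert(self, amount):
            return amount * self.rate_source.get_rate()

    class FooTest(unittest.TestCase):
        def test_convert(self):
            # The collaborator is faked, so this test can only break if Foo
            # itself, or the interface it uses, changes -- not if some
            # unrelated part of the system does.
            rates = Mock()
            rates.get_rate.return_value = 2.0
            self.assertEqual(Foo(rates).convert(21), 42.0)

    if __name__ == "__main__":
        unittest.main()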

Wim Coenen
+1  A: 

I think you're looking at fixing a symptom rather than recognizing the whole of the problem. The root problem is that a true API is a published interface*, and it should be subject to the same bounds you would place on any programming contract: no changes! You can add to an API and call it API v2, but you can't go back and change API v1.0; otherwise you have indeed broken backward compatibility, which is almost always a bad thing for an API to do.

(* I don't mean to call out any specific interfacing technology or language; 'interface' here can mean anything from the class declarations on up.)
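
A sketch of the 'add, don't change' rule in Python (the functions are invented):

    def total_price(items):
        # v1.0 of the published API: its contract is frozen.
        return total_price_v2(items, tax_rate=0.0)

    def total_price_v2(items, tax_rate):
        # v2 adds a capability alongside v1 instead of changing v1,
        # so existing callers (and their tests) keep working unchanged.
        return sum(items) * (1 + tax_rate)

    assert total_price([10, 20]) == 30          # old callers untouched
    assert total_price_v2([10, 20], 0.5) == 45  # new callers opt in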

I would suggest that a Test Driven Development approach would help prevent many of these kinds of problems in the first place. With TDD you would be "feeling" the awkwardness of the interfaces while you were writing the tests, and you would be compelled to fix those interfaces earlier in the process rather than waiting until after you've written a thousand tests.

One of the primary benefits of Test Driven Development is that it gives you instant feedback on the programmatic use of your class/interface. The act of writing a test is a test of your design, while the act of running the test is the test of your behavior. If it's difficult or awkward to write a test for a particular method, then that method is likely to be used incorrectly, meaning it's a weak design and it should be refactored quickly.
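
A contrived Python sketch of that feedback loop (all names invented):

    # A test for this first cut is painful to write: eight arguments,
    # most of them infrastructure details the test doesn't care about.
    def send_invoice(customer_id, db_host, db_port, db_user, db_password,
                     smtp_host, smtp_port, template_dir):
        ...

    # Feeling that pain while writing the first test pushes the design
    # toward something like this, before a thousand tests depend on it:
    def send_invoice_v2(customer_email, mailer):
        mailer.send(customer_email, "Your invoice is attached.")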

John Deters