tags:
views: 280
answers: 7

What are some situations where unit testing and TDD and the like are more trouble than they're worth?

Some things I've come up with are:

  • When generating test data is tricky: Sometimes, being able to come up with valid, non-trivial test data is a challenge in itself.
  • When the only practical way of verifying correctness of the code is to run it.
  • When you're testing visual elements of the design.

What are some other cases?

+1  A: 

I'm not sure that it's ever unhelpful. In some cases it may be more difficult, and you may choose not to use it -- in the case of the visual layout of your UI, for instance. There may also be times when it is wasted effort -- for example, unit testing designer-generated code or frameworks not written by you.

Generating data shouldn't be an impediment to unit testing. Your tests should be small and well-focused enough that you don't typically need to generate an entire dataset for any single test, so mocking is a very useful technique in these situations. If I find myself mocking the same things over and over, I will sometimes coalesce them into a fake database class that all of the tests can rely on.
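The fake-database idea can be sketched like this (a minimal Python illustration; `FakeUserStore` and `greeting` are invented names, not from any real library):

```python
# A tiny in-memory fake standing in for a real database layer.
# FakeUserStore and greeting are illustrative names only.

class FakeUserStore:
    """In-memory stand-in for a database-backed user store."""
    def __init__(self):
        self._users = {}

    def add(self, user_id, name):
        self._users[user_id] = name

    def get(self, user_id):
        return self._users.get(user_id)

def greeting(store, user_id):
    """Code under test: depends only on the store interface, not a real DB."""
    name = store.get(user_id)
    return f"Hello, {name}!" if name else "Hello, stranger!"

store = FakeUserStore()
store.add(1, "Ada")
print(greeting(store, 1))   # Hello, Ada!
print(greeting(store, 2))   # Hello, stranger!
```

Because every test talks to the same small fake, no test needs a full dataset, and the fake can be shared across the suite.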

Neither unit testing nor running your code will verify its correctness. Unit testing can help eliminate bugs, especially with TDD, and make sure that bugs that are found are fixed. If you need to make sure that your code is correct, you'll need to apply different, logic-based techniques to prove correctness. These are outside the scope of unit testing.

tvanfosson
Heh. Beat me by 32 seconds.
Otto
A: 

I wouldn't say there are any cases where automated testing is unhelpful.

There are certainly cases where it's less helpful. I don't think creating fixture data is one of them. There are tools to help, like factory_girl (in Ruby, at least). In fact, if your model is so complicated that you need to create a dozen objects with all sorts of associations, I'd consider that a code smell; maybe the model isn't as concise as it could be.
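The factory idea (as in factory_girl) can be sketched in Python; `make_user` and its fields are illustrative, not a real API:

```python
# Factory-style helper: builds a valid default object, with per-test
# overrides. make_user and its default fields are invented for illustration.

def make_user(**overrides):
    user = {"name": "Alice", "email": "alice@example.com", "admin": False}
    user.update(overrides)
    return user

# Each test specifies only the fields it actually cares about.
admin = make_user(admin=True)
print(admin["admin"])   # True
print(admin["name"])    # Alice
```

The point is that each test states only the data it depends on; everything else comes from sensible defaults, so fixture setup stays short.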

That said, there are a couple of cases where the drawbacks possibly outweigh the benefits. I wrote some code the other day that forks an external process. I don't care about the output, I don't care about the return code, I don't even care whether it worked; it's totally fire-and-forget, and another process will come along later to clean up if something went wrong.

In that case I didn't bother writing any tests, because the benefit wasn't worth the time of setting up a fake external program to verify my arguments, and so on.
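For what it's worth, such a test could look like the following Python sketch, stubbing out the process launch to check only the arguments (`launch_cleanup` and the `cleanup-tool` command line are invented for illustration):

```python
# Verify the arguments of a fire-and-forget process launch without
# actually running anything. launch_cleanup and "cleanup-tool" are
# illustrative, not from the original code.
import subprocess
from unittest import mock

def launch_cleanup(path):
    # Fire-and-forget: output and return code are deliberately ignored.
    subprocess.Popen(["cleanup-tool", "--path", path])

with mock.patch("subprocess.Popen") as fake_popen:
    launch_cleanup("/tmp/work")
    # The mock records the call, so we can check the arguments passed.
    fake_popen.assert_called_once_with(["cleanup-tool", "--path", "/tmp/work"])
print("arguments verified")
```

Whether this is worth the setup time is exactly the judgment call being described above.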

Otto
+3  A: 

I believe your first two points are not valid.

  • Creating test data may be a challenge (in fact, it's usually a major part of writing unit tests), but that's simply something you have to accept, not a reason to give up on unit tests. And it can't be impossible, otherwise how would you ever know your app is working correctly?
  • Unit tests run the code in order to verify its correctness - I don't see the problem.

There certainly are aspects of an application that cannot be unit-tested - visual layout (screen or print) is one such aspect, as is usability in general - things that cannot really be formally specified.

A situation where unit testing may not be applicable is when you're faced with an existing application that was not developed with testability or even modularity in mind (Big Ball of Mud Anti-pattern). But even then, if you know you'll have to maintain and extend this beast for a significant length of time, it is nearly always possible and useful to find a way to automatically test at least some parts of the application. Nobody says you have to write a test suite that achieves 100% code coverage before doing anything else.
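One common way into such a beast is a "characterization test": record whatever the legacy code currently does, without claiming it is correct, so that later refactoring cannot silently change it. A minimal Python sketch (`legacy_price` is an invented stand-in for tangled legacy logic):

```python
# Characterization test: pin down observed behavior of legacy code.
# legacy_price is an illustrative stand-in, not real production code.

def legacy_price(quantity, unit_price):
    # Imagine this is tangled legacy logic we dare not refactor blindly.
    total = quantity * unit_price
    if quantity > 10:
        total *= 0.9   # undocumented bulk discount, discovered by running it
    return round(total, 2)

# The tests record what the code does today, correct or not.
assert legacy_price(5, 2.0) == 10.0
assert legacy_price(20, 2.0) == 36.0
print("behavior characterized")
```

A handful of such tests gives a safety net long before anything like full coverage exists.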

Michael Borgwardt
A: 

Just some random thoughts on this:

  1. I assume Microsoft does not have a unit test for its various ways of shutting down a computer. It could be done with virtual machines, but it's probably not worth it.
  2. For hardware makers: ensuring that drivers work on different hardware is probably done manually, too (hi there nvidia, you broke my gfx card ;) )
  3. Run-once shell scripts. Just tweak them till they work.

Still, those are corner cases. I also think tests are almost always possible, and they usually pay you back more in the long run than you invest at the beginning.

Lemmy
A: 

I'm a firm believer in thoughtful testing; however, I find Unit Testing and TDD to be mostly a waste of time.

From an empirical standpoint:

  1. There is no empirical evidence that demonstrates higher quality code.
  2. There is no empirical evidence that demonstrates higher productivity.
  3. There is no empirical evidence that demonstrates cost savings.
  4. There are 'stories' presented in a pseudoscientific manner that suggest TDD is beneficial, but there are no control groups and there are no real metrics.

The benefits of TDD:

  1. The evangelists who 'know' TDD benefit by promoting their expertise.
  2. The software groups who sell/promote Unit Testing tools benefit.
  3. There may be some benefit to Unit Testing if your developers are not high-caliber.

    • Unit Testing only catches the most obvious errors
    • If a developer consistently finds bugs via Unit Testing, I would replace them.
    • If I were to outsource development to a body shop in Bangalore, I'd implement Unit Testing. Otherwise, I'll stick to working with strong developers - these guys are much more cost effective in the long run.

Subjective Analysis:

  1. If you listen to the arguments made by TDD proponents, you can readily replace TDD with prayer and the validity of the reasoning does not change...
  2. Unit Tests are code - you are doubling/tripling the size of your code base... That time could probably be better spent analyzing your code.
  3. High quality software comes from having antagonistic/cooperative teams. The same entity that writes the code has no business testing the code - that should be the job of a QA analyst.
  4. High quality and cost effective software comes from following good design principles - SOLID/GRASP/GoF
  5. After reviewing Unit Testing and TDD, the real world analogy I would draw is... it's sort of like running a check list on yourself with items like:
    • inhaling air: check
    • exhaling air: check
    • left foot forward: check
    • right foot forward: check
    • blink eyes: check
    • insert gum: check
    • close mouth: check
    • open mouth: check
    • iterate until swallow: check...
    • Yes you might actually find an issue, but you'll never figure out anything of consequence without spending a huge effort to code for it.
  6. Jebus told me that TDD is a false god.
mson
A: 
  • When generating test data is tricky: Sometimes, being able to come up with valid, non-trivial test data is a challenge in itself.

When you need the system to be "in context" with a "full data set", then what you are doing is not unit testing. It's testing, but you strain the "unit" bit quite a lot. You need smaller tests for that. The hard thing with TDD is getting your code into such a shape that you can test it in the small. It is valuable, but not easy. If you do test-after (which is NOT TDD), then it's almost impossible to avoid your situation.

So when you want to test something larger than a unit (i.e., a method on a class), you will want to use something like UATs (user acceptance tests). But in that scenario you still want tests on individual functions, as you would have if you practice TDD.

  • When the only practical way of verifying correctness of the code is to run it.

But the code is running in the unit test. Do you mean when it's running in context or something else?

  • When you're testing visual elements of the design.

I thought this was what your second bullet point was getting at, but I guess not. It is hard and often unrewarding to try to test layout in unit tests. Such tests are fragile.

So it may not be valuable to use UT to do large subsystem tests or to verify screen layout in some circumstances. But even if so, it is incredibly valuable for the 90% of your work that remains.

tottinge
A: 

I think Michael summed it up quite nicely: "things that cannot really be formally specified". It turns out there are lots of such things. Usability is one example (although once you've decided which behavior is usable, you can and should of course test that behavior!).

Somewhat paradoxically, lots of number-crunching tasks cannot be formally specified either. Take a weather forecast: the goal is of course to predict tomorrow's weather, but that's not a formal specification. You can test whether the algorithms you use do what they should do (calculating averages, inverting matrices - things that can be formally specified), but then your weather forecast program could pass all the tests and still be wrong 90% of the time. Or you could use lots of historical data to test whether the algorithm yields good predictions, but this is dangerous, because it can easily lead to an algorithm that is only accurate for the historical data you used, not in general. It would also probably mean that your unit tests take hours or days to run.

Even worse, your algorithm might have parameters that have to be "tweaked", e.g. for the measurement instruments used, and the optimal parameters might not be the same for every algorithm, so the unit tests would need manual interaction to find good parameters. Possible in theory, but probably not very useful. I guess the same arguments apply to OCR, ICR, many signal processing tasks, face recognition (and many other image processing tasks), typical Photoshop tools like "red eye removal", or search engine ranking algorithms (just to name a few examples).
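To make the "test the formally specifiable pieces" point concrete, here is a minimal Python sketch; `moving_average` is an invented helper, not part of any real forecasting library:

```python
# The formally specifiable pieces of a forecasting pipeline can still be
# unit-tested, even if "predicts tomorrow's weather" cannot.
# moving_average is an illustrative helper only.

def moving_average(values, window):
    """Simple moving average; rejects invalid window sizes."""
    if window <= 0 or window > len(values):
        raise ValueError("invalid window")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

# These properties ARE formally specifiable, so they get ordinary tests.
assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]
assert moving_average([10.0], 1) == [10.0]
print("ok")
```

Passing these tests says nothing about forecast accuracy, which is exactly the gap described above: the building blocks are verifiable, the end goal is not.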