views: 120
answers: 8

If I write test code to test my code, the test code might itself have bugs, so it needs to be tested; and when my code changes, the test code may have to change as well. Repeat ad infinitum.

How is this problem solved (in practice and in theory)?

A: 

Write really good test code. Document it thoroughly, think about it carefully as you're writing it, and be methodical in your approach. Also, if you keep individual tests as short as possible, you can increase code coverage while keeping the chances of creating a bug as small (and visible) as possible.
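
To give a rough sense of what "short" can mean in practice, here is a minimal sketch using Python's unittest (the slugify function and its behavior are invented for illustration): each test pins down exactly one behavior, so a failure points straight at what broke.

    import unittest

    def slugify(title):
        """Hypothetical function under test: lowercases and hyphenates a title."""
        return "-".join(title.lower().split())

    class SlugifyTests(unittest.TestCase):
        # Each test checks exactly one behavior, so a failure points
        # straight at the behavior that broke.
        def test_lowercases_words(self):
            self.assertEqual(slugify("Hello"), "hello")

        def test_joins_words_with_hyphens(self):
            self.assertEqual(slugify("hello world"), "hello-world")

        def test_empty_title_gives_empty_slug(self):
            self.assertEqual(slugify(""), "")

    if __name__ == "__main__":
        unittest.main()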

Stefan Kendall
@Stefan. If this worked, then you could apply this same idea to the original code. If the original code can have bugs and needs to be tested, then surely the test code can have bugs.
+2  A: 

In practice, by ensuring that the test fails before it passes. TDD doesn't magically remove every possibility of bugs in your code; it hopefully reduces the bug count, but it is more important as a design technique.

Where it really reduces the bug count is when you refactor. The tests have many times saved my bacon when I refactored and broke some old behavior that was established by a test.

When a test fails before it passes, you can be assured that the code you are implementing actually behaves in a way that makes the test pass, making the test a valid one. If you change code that breaks tests, then you need to think about which one is right, and adjust accordingly.
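
As a hedged illustration of that red-then-green step (Python/unittest, with an invented apply_discount function): the test is first run against a stub and seen to fail, and only then is the real code written to make it pass, which is what ties the test to the production code.

    import unittest

    # Step 1 (red): while the function was only a stub, the test below failed.
    # def apply_discount(price, percent):
    #     raise NotImplementedError

    # Step 2 (green): the real implementation makes the same test pass.
    def apply_discount(price, percent):
        """Return price reduced by the given percentage."""
        return price * (1 - percent / 100)

    class ApplyDiscountTest(unittest.TestCase):
        def test_ten_percent_off(self):
            # This assertion failed against the stub (red) and passes against
            # the implementation (green), so it really depends on the production code.
            self.assertAlmostEqual(apply_discount(200.0, 10), 180.0)

    if __name__ == "__main__":
        unittest.main()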

When I read your question, I see an underlying expectation that TDD will prevent all bugs. It won't. But it will prevent some. More importantly it will prevent bugs when you refactor, allowing you to improve your design over time without fear of regressing.

But where TDD really shines is in driving design. It can ensure that the design properly factors dependencies, that it is modular, and that it does what you expected it to do (as opposed to it doing the right thing - that has to be part of integration or acceptance testing).

Edit: (In response to comment) I understood your question, and was trying to answer it. Since I wasn't that successful, I'll try one more time.

If a test is first seen to fail and then pass, the basic worry - that the test has a bug and fails to test anything - is handled (the code validates the test and the test validates the code). It clearly depends on the production code to pass, so it tests something. More than that it can't really do. There are further layers of testing (integration, acceptance, perhaps a general QA pass) that will address more profound issues.

But my overriding point was to challenge what I understand to be the premise of the question: how can TDD deliver 100% bug-free code if the tests themselves can have bugs? My answer to that premise is that it can't. It does something else (it drives design). I hope that clarifies things.

Yishai
@Yishai. I think that you missed the point of the question. My point is that the test code itself can have bugs and therefore needs to be tested. Hence, there is an infinite regress.
A: 

Unit tests are not meant to be 100% protection against bugs. They're merely a technique to improve the quality of the code. Someone who has never practiced TDD might think that the overall quality of the code won't improve if one adds a test that is meant to pass, DOH!

To address the testing of the unit tests: this is precisely why you make the test fail first - it improves the quality of the test. Again, there are no absolutes here, but the practice does help. Like others have said, the tests test the code and the code tests the tests. This is better than no tests at all.
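
A small, hypothetical sketch of the kind of test bug the fail-first habit catches (Python's unittest; the names are made up): the broken test asserts a tautology, so it can never go red - seeing it pass before any real code exists is the warning sign.

    import unittest

    def parse_port(text):
        """Hypothetical code under test: parse a TCP port number."""
        return int(text)

    class BuggyTest(unittest.TestCase):
        def test_parses_port_BROKEN(self):
            # Bug in the test itself: it asserts on a constant instead of the
            # parsed result, so it would pass even if parse_port were an empty stub.
            parse_port("8080")
            self.assertEqual("8080", "8080")   # tautology - always green

    class FixedTest(unittest.TestCase):
        def test_parses_port(self):
            # The corrected test was first run against a stub and seen to fail
            # (red), which proves it really checks the behavior.
            self.assertEqual(parse_port("8080"), 8080)

    if __name__ == "__main__":
        unittest.main()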

As far as going back to tests and fixing them up when they break, well, that is meant to happen. If the tests are good (i.e. very granular) and each tests a very narrow use case, they will alert you as to where the real code will break.

Unit tests are invaluable as a regression tool. Both macro-regression (as in days, weeks, years after the code is written) and micro-regression (whilst writing the code). I am totally inventing these terms, by the way. If they are actual terms used by Martin Fowler or Uncle Bob, well, that just means I am as brilliant as they are :) Just kidding.

So macro-regression is fairly well understood: you change the code months after you wrote it, and the tests alert you to what's broken. Micro-regression, on the other hand, happens while you write the code and slowly add functionality. If you don't have tests, chances are you are exercising the code from wherever it is going to be used and just modifying that calling code to walk through various scenarios. There is a high risk that later code will break earlier use cases.

With TDD, all the use cases (tests) for the functionality you implement stay around. This means that you are pretty much guaranteed that whatever you add later will not break the earlier code.
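
A rough sketch of that micro-regression protection, using an invented Cart class and Python's unittest: the test written for the first use case stays in the suite, so a later change (adding quantities here) that accidentally broke single-item adds would turn it red immediately.

    import unittest

    class Cart:
        """Hypothetical class grown feature by feature under TDD."""
        def __init__(self):
            self._items = []

        def add(self, price, quantity=1):
            # Later feature: quantities. If this change had, say, mishandled
            # single-item adds, the earlier test below would fail at once.
            self._items.append(price * quantity)

        def total(self):
            return sum(self._items)

    class CartTests(unittest.TestCase):
        def test_single_item_total(self):
            # Written for the first use case; it keeps guarding it forever.
            cart = Cart()
            cart.add(5.0)
            self.assertEqual(cart.total(), 5.0)

        def test_quantity_multiplies_price(self):
            # Added later, alongside the new functionality.
            cart = Cart()
            cart.add(5.0, quantity=3)
            self.assertEqual(cart.total(), 15.0)

    if __name__ == "__main__":
        unittest.main()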

Igor Zevaka
A: 

Write your tests as simply as possible.

Then you'll have a virtuous cycle where your code becomes as simple as possible, so that it can test the tests.

Or so the prevailing theory goes.

MSN
+4  A: 

The test tests the code, and the code tests the test.

When you write a test and then write just enough code to run it, the test fails. Then you write the code to make it pass. If it doesn't go this way - if it passes before you write the code, or if it still fails after - something's gone wrong. If the test passes before the code is written, obviously there's something wrong with the test; fix it until you've got red, the failure you expected. If the test was red and doesn't go green after writing the code, then one of two things is wrong: the test or the code. Figure it out, fix it, and move ahead.

Carl Manaster
+1 In a similar vein, Stu Halloway draws a parallel to double-ledger accounting.
Michael Easter
Interesting. When the code changes, the test changes as well, right? So this doubles the amount of work but there is no infinite regress, correct?
@Michael Do you have a link to the Stu Halloway double-ledger stuff?
@unknown(google): no, the work is not doubled. Yes, you may have to write more code - both the function and the test that tests the function - but the real work is figuring out what your code should do. Writing the test first lets you do that figuring up front, then all you have left is implementation details. So it's really the same work, just separated and reordered. And better.
Carl Manaster
re: link. I don't have a link. He said it off-the-cuff during an NFJS talk, as a metaphor. re: doubled? No, it isn't doubled in 'amount', despite the insight of the metaphor. It may seem that way, but it is a different mindset.
Michael Easter
Here is a quote from Uncle Bob http://unhandled-exceptions.com/blog/index.php/2009/02/15/uncle-bob-tdd-as-double-entry-bookkeeping/
Yishai
A: 

It sounds like you're treating writing unit tests as proving correctness. These are two separate things. Test code provides a known set of inputs and verifies the output - tests should be pretty simple. Set up inputs, execute, verify outputs. Tests only check that the expected output is generated for the scenarios you've implemented in tests. If you're writing tests that are so complicated that they themselves need to be tested, then you need to vastly simplify your tests. You shouldn't be writing test code for your test code.
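
For what that "set up inputs, execute, verify outputs" shape looks like, here is a minimal sketch in Python's unittest (the median function is invented): three straight-line steps with nothing in them that could plausibly need its own test.

    import unittest

    def median(values):
        """Hypothetical code under test."""
        ordered = sorted(values)
        mid = len(ordered) // 2
        if len(ordered) % 2:
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2

    class MedianTest(unittest.TestCase):
        def test_median_of_odd_sized_list(self):
            values = [3, 1, 2]                  # set up known inputs
            result = median(values)             # execute
            self.assertEqual(result, 2)         # verify the output

    if __name__ == "__main__":
        unittest.main()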

justinlatimer
+1  A: 

The idea is that unit tests do not have complex logic... each should basically consist of an action and an assertion (or assertions) about the result or behavior of that action. Not much, if any, looping or branching logic... nothing in which an error is likely that isn't obvious enough to catch. So yes, as others have said, you just do not write tests for your tests.

ColinD
+1  A: 

I recently had a debate with someone about this at work. The person raising the issue had a fair point but, ultimately, unit testing is a practical, proven technique. TDD adds a "test first" aspect that further reduces the chance of the test being incorrect, since the test should fail before any production code is written. While it's true that tests can have bugs, in my eyes that issue is more of a philosophical debate than a practical barrier.

The argument that you need test code to test the tests only holds any weight with me when the code in question forms a re-usable framework or utility testing class that other unit test fixtures will use.

Example: Prior to NUnit 2.5, we had a re-usable data-driven testing class. It was used by and relied upon by many other tests, and it was fairly complicated - certainly complicated enough to have bugs. So we unit tested the testing class... and then, when NUnit 2.5 came out, we swapped all the tests over to it and threw away our own implementation.

On the other hand, well written test code is straightforward, contains no special logic / branching and helps build a scaffolding around the production code. Where possible, we should always leverage the functionality in other, well-tested frameworks rather than doing any logic in our test class. If regular unit test code gets complicated enough to make someone think it may require testing, I would argue the approach is likely flawed.
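
The same trade-off can be sketched outside NUnit; here is a hypothetical Python analogy (the roman converter is invented): instead of maintaining a homegrown data-driven helper - which, like the pre-2.5 class above, would be complex enough to deserve its own tests - the test keeps its cases in a plain data table and lets the framework's built-in subTest support do the bookkeeping.

    import unittest

    def roman(n):
        """Hypothetical code under test: tiny Roman-numeral converter."""
        numerals = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
        out = ""
        for value, symbol in numerals:
            while n >= value:
                out += symbol
                n -= value
        return out

    class RomanTests(unittest.TestCase):
        def test_known_conversions(self):
            # The "data-driven" part is just a table; the framework reports
            # each failing case individually via subTest.
            cases = [(1, "I"), (4, "IV"), (6, "VI"), (9, "IX"), (14, "XIV")]
            for number, expected in cases:
                with self.subTest(number=number):
                    self.assertEqual(roman(number), expected)

    if __name__ == "__main__":
        unittest.main()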

Mark Simpson