Please note I have not yet 'seen the light' on TDD, nor truly grasped why it has all of the benefits evangelised by its main proponents. I'm not dismissing it - I just have reservations, which are probably born of ignorance. So by all means laugh at the questions below, so long as you can correct me :-)

Can using TDD leave you open to unintended side-effects of your implementation? The concept of "the least amount of code to satisfy a test" suggests thinking in the narrowest terms about a particular problem without necessarily contemplating the bigger picture.

I'm thinking of objects that hold or depend upon state (e.g. internal field values). If you have tests which instantiate an object in isolation, initialise that object and then call the method under test, how would you spot that a different method has left behind an invalid state that would adversely affect the behaviour of the first method? If I have understood matters correctly, then you shouldn't rely on order of test execution.

Other failures I can imagine include streams left unclosed, GDI+ objects left undisposed, and the like.

Is this even TDD's problem domain, or should integration and system testing catch such issues?

Thanks in anticipation....

+6  A: 

Some of this is in the domain of TDD.

Dan North says there is no such thing as test-driven development; that what we're really doing is example-driven development, and the examples become regression tests only once the system under test has been implemented.

This means that as you are designing a piece of code, you consider example scenarios and set up tests for each of those cases. Those cases should include the possibility that data is not valid, without considering why the data might be invalid.
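For instance, here is a minimal sketch of that idea (assuming NUnit; the `AgeParser` class and its contract are hypothetical, invented for illustration) in which each example scenario becomes a test, including the invalid-data case:

```csharp
using System;
using NUnit.Framework;

// Hypothetical class under test.
public static class AgeParser
{
    public static int Parse(string text)
    {
        return int.Parse(text); // throws FormatException on bad input
    }
}

[TestFixture]
public class AgeParserExamples
{
    [Test]
    public void ParsesAWellFormedAge()
    {
        Assert.AreEqual(42, AgeParser.Parse("42"));
    }

    [Test]
    public void RejectsNonNumericInput()
    {
        // We specify *that* invalid data is handled, not *why* it is invalid.
        Assert.Throws<FormatException>(() => AgeParser.Parse("forty-two"));
    }
}
```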

Something like closing a stream can and should absolutely be covered when practicing TDD.

We use constructs like functions not only to reduce duplication but to encapsulate functionality. We reduce side effects by maintaining that encapsulation. I'd argue that we consider the bigger picture from a design perspective, but when it comes to implementing a method, we should be able to narrow our focus to that scope -- that unit of functionality. When we start juggling externalities is when we are likely to introduce defects.

That's my take, anyway; others may see it differently.

Jay
@Jay - how would you write a test to detect that a method has left a stream open? e.g. a test which decrypts a file and returns the plaintext content but which happens to leave an exclusive lock on the file until the decrypt stream is garbage collected? I thought tests tested interfaces, not internal implementation. What have I misunderstood here?
Neil Moss
@Neil Part of isolating a unit for testing is inversion of control. Implementing dependency injection, the class that gets content from an encrypted file would accept a stream in its constructor. By passing an abstraction instead of a concrete implementation, like `MemoryStream` or `FileStream` etc., you (1) make the code reusable and (2) allow yourself to pass a stream of your choosing in a test. After exercising the unit in your test, you can verify that the stream was closed. Preferably, you'd use a mock implementation of `Stream` on which you could simply verify that `Close()` was called.
Jay
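A minimal sketch of the approach described in that comment, assuming NUnit and Moq (the `EncryptedFileReader` class is hypothetical):

```csharp
using System.IO;
using Moq;
using NUnit.Framework;

// Hypothetical class under test: the stream is injected, not created inside.
public class EncryptedFileReader
{
    private readonly Stream stream;

    public EncryptedFileReader(Stream stream)
    {
        this.stream = stream;
    }

    public string ReadPlaintext()
    {
        // Disposing the reader closes the underlying stream.
        using (var reader = new StreamReader(stream))
        {
            return Decrypt(reader.ReadToEnd());
        }
    }

    private static string Decrypt(string ciphertext)
    {
        return ciphertext; // stand-in for real decryption
    }
}

[TestFixture]
public class EncryptedFileReaderTests
{
    [Test]
    public void ReadPlaintext_ClosesTheStreamItWasGiven()
    {
        var stream = new Mock<Stream>();
        stream.Setup(s => s.CanRead).Returns(true);

        new EncryptedFileReader(stream.Object).ReadPlaintext();

        // Verify the interaction, not the internal implementation.
        stream.Verify(s => s.Close());
    }
}
```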
@Jay - I wouldn't write a generic decrypt(stream) function which close()'d the stream it was given. Given a requirement "get the plaintext contents of a named file", I'd write a method: string GetPlaintextFromFile(string filename) {...} That method would open a FileStream, pass it to the class you describe above and then (ideally) close the FileStream. Is such a function considered even _unit_-testable? If not, how does TDD let me write such a method? And how would I validate that the stream was closed, in a TDD fashion?
Neil Moss
@Neil TDD really leans on (and helps enforce) the Single Responsibility Principle. As described, that method does not readily lend itself to testing because it instantiates a `FileStream`. If you can extract the instantiation to a factory or container then you can reduce the responsibilities of the method and verify its behaviour with respect to the stream.
Jay
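A hedged sketch of that extraction (the `IStreamFactory` seam is an assumption for illustration, and it reuses the hypothetical `EncryptedFileReader` from the sketch above):

```csharp
using System.IO;

// Hypothetical seam: creating the stream is a separate responsibility.
public interface IStreamFactory
{
    Stream OpenRead(string filename);
}

public class FileSystemStreamFactory : IStreamFactory
{
    public Stream OpenRead(string filename)
    {
        return new FileStream(filename, FileMode.Open, FileAccess.Read);
    }
}

public class PlaintextService
{
    private readonly IStreamFactory factory;

    public PlaintextService(IStreamFactory factory)
    {
        this.factory = factory;
    }

    public string GetPlaintextFromFile(string filename)
    {
        // 'using' closes the stream even if decryption throws; a test can
        // inject a factory that hands back a fake stream and assert on it.
        using (Stream stream = factory.OpenRead(filename))
        {
            return new EncryptedFileReader(stream).ReadPlaintext();
        }
    }
}
```

In a test, the injected factory can return a fake whose `Close()` you verify, which is one way to validate in a TDD fashion that the stream was closed.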
+1  A: 

Good questions. Here's my two cents, based on my personal experience:

Can using TDD leave yourself open to unintended side-effects of your implementation?

Yes, it can. TDD is not a complete solution in itself. It should be used along with other techniques, and you should definitely bear in mind the big picture (whether you are responsible for it or not).

I'm thinking of objects that hold or depend upon state (e.g. internal field values). If you have tests which instantiate an object in isolation, initialise that object and then call the method under test, how would you spot that a different method has left behind an invalid state that would adversely affect the behaviour of the first method? If I have understood matters correctly, then you shouldn't rely on order of test execution.

Every test method should execute independently, with no regard for what was executed before or what will be executed after. If that's not the case then something's wrong (from a TDD perspective on things).

Talking about your example: when you write a test you should know, in reasonable detail, what your inputs will be and what the expected outputs are. You start from a defined input, in a defined state, and you check for a desired output. You're not 100% guaranteed that the same method in another state will do its job without errors, but the "unexpected" should be reduced to a minimum.

If you designed the class, you should definitely know whether two methods can change some shared internal state and how; and, more importantly, whether this should happen at all or whether it signals a problem of low cohesion.
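To make that concrete, a minimal sketch (assuming NUnit; the `Account` class is hypothetical) where every test starts from the same defined state, so execution order cannot matter:

```csharp
using NUnit.Framework;

// Hypothetical class under test.
public class Account
{
    public decimal Balance { get; private set; }
    public Account(decimal openingBalance) { Balance = openingBalance; }
    public void Deposit(decimal amount) { Balance += amount; }
    public void Withdraw(decimal amount) { Balance -= amount; }
}

[TestFixture]
public class AccountTests
{
    private Account account;

    [SetUp]
    public void CreateFreshAccount()
    {
        // Runs before every test: no test can see state left by another.
        account = new Account(100m);
    }

    [Test]
    public void Withdraw_ReducesBalance()
    {
        account.Withdraw(30m);
        Assert.AreEqual(70m, account.Balance);
    }

    [Test]
    public void Deposit_IncreasesBalance()
    {
        account.Deposit(30m);
        Assert.AreEqual(130m, account.Balance);
    }
}
```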

Anyway, a good design at the "TDD" level doesn't necessarily mean that your software is well built; you need more, as Uncle Bob explains well here:

http://blog.objectmentor.com/articles/2007/10/17/tdd-with-acceptance-tests-and-unit-tests

Martin Fowler wrote an interesting article about mocks vs. stubs which covers some of the topics you are talking about:

http://martinfowler.com/articles/mocksArentStubs.html#ClassicalAndMockistTesting

mamoo
+3  A: 

TDD is not a replacement for being smart. The best programmers become even better with TDD. The worst programmers are still terrible.

The fact that you are asking these questions is a good sign: it means you're serious about doing programming well.

The concept of "the least amount of code to satisfy a test" suggests thinking in the narrowest terms about a particular problem without necessarily contemplating the bigger picture.

It's easy to take that attitude, just like "I don't need to test this; I'm sure it just works." Both are naive.

This is really about taking small steps, not about calling it quits early. You're still going after a great final result, but along the way you are careful to justify and verify each bit of code you write, with a test.

The immediate goal of TDD is pretty narrow: "how can I be sure that the code I'm writing does what I intend it to do?" If you have other questions you want to answer (like, "will this go over well in Ghana?" and "is my program fast enough?") then you'll need different approaches to answer them.

I'm thinking of objects that hold or depend upon state.

how would you spot that a different method has left behind an invalid state?

Dependencies and state are troublesome. They make for subtle bugs that appear at the worst times. They make refactoring and future enhancement harder. And they make unit testing infeasible.

Luckily, TDD is great at helping you produce code that isolates your logic from dependencies and state. That's the second "D" in "TDD".

Jay Bazuzi
@Jay Bazuzi - how does TDD offer to help me ensure that state is consistent? The end product is a program which applies logic to state, and if state is not accurately maintained, I have a suite of tests which says all is well, but a broken program. Is there a middle ground?
Neil Moss
TDD doesn't really do that. Its scope is the unit, ideally a simple class. It doesn't say "all is well"; it says "each line of code you wrote does what you intended it to do." Acceptance tests are the usual approach to component and whole-program verification. And, of course, you have to use your program.
Jay Bazuzi
+2  A: 

The concept of "the least amount of code to satisfy a test" suggests thinking in the narrowest terms about a particular problem without necessarily contemplating the bigger picture.

It suggests that, but that isn't what it means. What it means is powerful blinders for the moment. The bigger picture is there, but interferes with the immediate task at hand - so focus entirely on that immediate task, and then worry about what comes next. The big picture is present, is accounted for in TDD, but we suspend attention to it during the Red phase. So long as there is a failing test, our job is to get that test to pass. Once it, and all the other tests, are passing, then it's time to think about the big picture, to look at shortcomings, to anticipate new failure modes, new inputs - and write a test to express them. That puts us back into Red, and re-narrows our focus. Get the new test to pass, then set aside the blinders for the next step forward.

Yes, TDD gives us blinders. But it doesn't blind us.
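A minimal sketch of that Red/Green rhythm (assuming NUnit; the `BoundedStack` example is hypothetical):

```csharp
using NUnit.Framework;

// Red: this test is written first and fails, because BoundedStack
// doesn't exist yet.
[TestFixture]
public class BoundedStackTests
{
    [Test]
    public void Push_MakesStackNonEmpty()
    {
        var stack = new BoundedStack(8);
        stack.Push(42);
        Assert.IsFalse(stack.IsEmpty);
    }
}

// Green: the least code that passes. Overflow behaviour is deliberately
// ignored until a failing test for it exists, which re-narrows our focus.
public class BoundedStack
{
    private int count;

    public BoundedStack(int capacity) { }

    public void Push(int value) { count++; }

    public bool IsEmpty
    {
        get { return count == 0; }
    }
}
```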

Carl Manaster
@Carl - again I don't _get_ that. I don't see where _design_ comes into that cycle. Why not take a breath and contemplate such possibilities when writing the code the first time? How much time do you give over to refactoring (i.e. doing it again) before deciding "that's good enough"?
Neil Moss
@Neil, because we do big things, bigger than our minds can easily hold at once. Refactoring *isn't* "doing it again". Do one thing, do it right and verifiably. Now add to it, without breaking it. That's easier (for some of us, at least) than trying to do everything at once. We make fewer mistakes, and our mistakes are smaller, than when we work in the way you suggest.
Carl Manaster
@Neil Carl's last comment is gold. The truth is, one just has to drink the Kool-Aid at first and give it an honest go. After some practice the benefits reveal themselves more clearly than we are able to articulate. Many of the benefits are not TDD-specific but come simply from having a suite of unit tests; more often than not, though, it turns out to be "test first or not at all."
Jay