I am trying to practice TDD.

As I understand it, TDD should go like this:

  1. I write a test list for the interface/class I am going to develop.
  2. I start with the easiest not-yet-implemented test from my test list.
  3. The test gets written, no implementation code yet.
  4. The interface of the class gets written to make the code compile.
  5. The test gets run, giving me a failing test.
  6. The implementation gets written, making the test pass.
  7. Refactor the code written.
  8. goto 2.
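To make the cycle concrete, here is a minimal sketch of one pass through steps 3-6 (JUnit assumed; the Stack class and its push/pop methods are just made-up examples):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Step 3: the test is written first.
    public class StackTest {
        @Test
        public void popReturnsLastPushedElement() {
            Stack<String> stack = new Stack<String>();
            stack.push("a");
            assertEquals("a", stack.pop());
        }
    }

    // Step 4 would be just enough of Stack to compile (push does nothing,
    // pop returns null), which gives the failing run of step 5.
    // Step 6: the simplest implementation that makes the test pass.
    class Stack<T> {
        private final java.util.ArrayDeque<T> items = new java.util.ArrayDeque<T>();
        public void push(T item) { items.push(item); }
        public T pop() { return items.pop(); }
    }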

The problem I have is that when I arrive at points 6 and 7, I invariably come to the conclusion that the implementation I just wrote should be delegated to another class.

What should a true TDD'er do at this point?

  1. Leave the existing test list alone for a while and create a new one for the new class. (But the same problem can arise when implementing the new class.)
  2. Go the interaction-based testing route and mock the new class; continue with the test cases of the class you are working on, and come back later to create a correct implementation of the mocked class.
  3. Conclude that this situation should not present itself, and that I have not thought out my initial design well enough. (But wouldn't that defeat the purpose of TDD?)

I would love to know how other people handle these situations.

+6  A: 

Don't look for a one-to-one relationship between your tests and your classes. If you decide to introduce a new class, let that be a refactoring supported by the original test, and add tests in the appropriate place (where that is depends on the specific case) when you want to add functionality, or to cover eventualities you haven't tested for yet.

I would add that the key to success with TDD is getting into the rhythm of red-green-refactor. When you feel the benefit of that rhythm, you have started to "get" it. That isn't to say you will find it worthwhile in all cases, but until you feel that rhythm you haven't gotten to what its advocates like about it.

And there is usually (especially in architecturally complicated applications, like n-tier applications) some amount of up-front design. Nothing set in stone, but enough to give the units a place to go. Of course the architecture may evolve in an agile methodology, but a general idea of the landscape needs to be there if there are multiple layers to the architecture.

EDIT (in response to the comment): Should the new class get tested in its own right? Not necessarily; it depends on whether the class develops an importance of its own. When you are unit testing, you are testing a piece of functionality. It isn't an integration test just because two classes are involved; it becomes an integration test when two units start interacting. The boundary I typically think of: if I have to set up significant state in group-of-classes A to interact with group-of-classes B, and especially if A calls B and what I am interested in testing is how B reacted to A, then I'm looking at an integration test.
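To make that boundary concrete, a rough sketch with made-up classes: the first test involves two classes but exercises one unit of functionality; the second sets up state in one group of classes to see how it reacts to another:

    import static org.junit.Assert.assertEquals;
    import java.util.HashMap;
    import java.util.Map;
    import org.junit.Test;

    public class BoundaryExamplesTest {

        // Two classes are involved, but this is still a unit test:
        // it exercises one piece of functionality (order totaling).
        @Test
        public void totalSumsLineItems() {
            Order order = new Order();
            order.add(new LineItem(10));
            order.add(new LineItem(2));
            assertEquals(12, order.total());
        }

        // Significant state is set up in one group of classes (Inventory)
        // to observe how it reacts to another (Checkout): this is drifting
        // into integration-test territory.
        @Test
        public void checkoutReservesStock() {
            Inventory inventory = new Inventory();
            inventory.stock("book", 5);
            new Checkout(inventory).process("book");
            assertEquals(4, inventory.available("book"));
        }
    }

    class LineItem {
        final int price;
        LineItem(int price) { this.price = price; }
    }

    class Order {
        private int total = 0;
        void add(LineItem item) { total += item.price; }
        int total() { return total; }
    }

    class Inventory {
        private final Map<String, Integer> counts = new HashMap<String, Integer>();
        void stock(String sku, int n) { counts.put(sku, n); }
        int available(String sku) { return counts.get(sku); }
        void reserve(String sku) { counts.put(sku, counts.get(sku) - 1); }
    }

    class Checkout {
        private final Inventory inventory;
        Checkout(Inventory inventory) { this.inventory = inventory; }
        void process(String sku) { inventory.reserve(sku); }
    }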

Yishai
But the new class(es) should get tested in their own right, shouldn't they? If the design drives you to the point where you'd like to create multiple "support" classes, the unit test you started with is becoming an integration test.
Lieven
+1  A: 

You should create a mock class: a single interface with predictable results, so you can test the original class.

Later on, you can repeat the procedure with the new class.
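A minimal sketch of the idea (Invoice and TaxCalculator are made-up names): the class under test depends only on an interface, and the test supplies a hand-rolled mock with predictable results:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // The new collaborator is captured as an interface.
    interface TaxCalculator {
        int taxFor(int amount);
    }

    // The class under test depends only on the interface.
    class Invoice {
        private final TaxCalculator calculator;
        Invoice(TaxCalculator calculator) { this.calculator = calculator; }
        int totalWithTax(int amount) { return amount + calculator.taxFor(amount); }
    }

    // A hand-rolled mock with predictable results lets us keep testing
    // Invoice before the real TaxCalculator implementation exists.
    public class InvoiceTest {
        @Test
        public void totalIncludesTax() {
            TaxCalculator fixedTax = new TaxCalculator() {
                public int taxFor(int amount) { return 5; }
            };
            assertEquals(105, new Invoice(fixedTax).totalWithTax(100));
        }
    }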

Gamecat
Actually, this is what I think I should do, but out there a holy war is raging between "state-based" and "interaction-based" testing. I don't like the fact that this solution ties your tests to a particular declaration of the interface you use. With state-based testing, I can (most likely) change the interface declaration of the supporting class without having to change my test cases. With interaction-based testing, I have to change the test cases too.
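For example, reusing the hypothetical Invoice and TaxCalculator from the sketch above (Mockito assumed for the interaction-based test):

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;
    import static org.mockito.Mockito.verify;
    import org.junit.Test;

    public class StyleContrastTest {

        // State-based: only Invoice's observable result is asserted.
        // The collaborator's interface can change shape without this
        // test changing, as long as the result stays correct.
        @Test
        public void stateBased() {
            TaxCalculator fivePercent = new TaxCalculator() {
                public int taxFor(int amount) { return amount / 20; }
            };
            assertEquals(105, new Invoice(fivePercent).totalWithTax(100));
        }

        // Interaction-based: the test stubs and verifies the exact call,
        // so renaming or reshaping taxFor() breaks the test as well.
        @Test
        public void interactionBased() {
            TaxCalculator calc = mock(TaxCalculator.class);
            when(calc.taxFor(100)).thenReturn(5);
            assertEquals(105, new Invoice(calc).totalWithTax(100));
            verify(calc).taxFor(100);
        }
    }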
Lieven
+2  A: 

When I run into this situation, I follow your solution #1. Keep recursing, making as many classes as you feel are appropriate, until you have a collection of implementations you're happy with. As you gain experience, your designs will come to reflect it, and this sort of thing won't happen as much.

David Seiler
That is pretty much what I am doing now, but I don't like the fact that it distracts you from the class you were testing. After a while you go back to that class and have to figure out where you left off.
Lieven
As Yishai said, you shouldn't think in terms of testing classes. You're testing implementations of solutions to problems, and if the implementation happens to span several classes, that's fine.
David Seiler
+3  A: 

The problem I have is that when I arrive at points 6 and 7, I invariably come to the conclusion that the implementation I just wrote should be delegated to another class.

Realizing your design would be better with a different class - that's design, and that's the point of TDD. So it's a fine thing, and it shouldn't bother you.

But it's bothering you. So what to do? Recognize that delegating to another class is a refactoring: something to be done after step 6, during step 7. Once you're green, refactor to a better design. You already have the tests for the new class; they're just wired to call the original class. That's perfectly fine. After extracting the class and delegating, if you'd be more comfortable having the tests call the extracted class directly, go for it; no harm done. If the extracted class starts to get used by other callers, I'd recommend it, and the point when you start calling it from other classes is a good time to do that (but if it bugs you now, do it now).
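A small sketch of what I mean, with made-up names: the extraction happens during step 7, and the original test stays green because the original class now delegates:

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Before the refactoring, the logic lived in OrderProcessor itself:
    //     int shippingFor(int weight) { return weight * 2; }

    // After step 7: the logic is extracted into a new class, and
    // OrderProcessor delegates to it.
    class ShippingCalculator {
        int costFor(int weight) { return weight * 2; }
    }

    class OrderProcessor {
        private final ShippingCalculator shipping = new ShippingCalculator();
        int shippingFor(int weight) { return shipping.costFor(weight); }
    }

    // The original test is untouched and still passes; it now covers
    // ShippingCalculator through the delegation.
    public class OrderProcessorTest {
        @Test
        public void shippingCostsTwoPerUnitOfWeight() {
            assertEquals(10, new OrderProcessor().shippingFor(5));
        }
    }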

Carl Manaster
Very pragmatic, thank you.
Lieven