I read the latest coding horror post, and one of the comments touched a nerve for me:

This is the type of situation that test driven design/refactoring are supposed to fix. If (big if) you have tests for the interfaces, rewriting the implementation is risk-free, because you will know whether you caught everything.

Now, in theory I like the idea of test driven development, but every time I've tried to make it work, it hasn't gone particularly well: I get out of the habit, and the next thing I know, all the tests I had originally written not only don't pass, but are no longer a reflection of the design of the system.

It's all well and good if you've been handed a perfect design from on high, straight from the start (which in my experience never actually happens), but what if halfway through the production of a system you notice that there's a critical flaw in the design? Then it's no longer a simple matter of diving in and fixing "the bug"; you also have to rewrite all the tests. A fundamental assumption was wrong, and now you have to change it. Test driven development is no longer a handy thing; it just means there's twice as much work for everything.

I've tried to ask this question before, both of peers, and online, but I've never heard a very satisfactory answer. ... Oh wait.. what was the question?

How do you combine test driven development with a design that has to change to reflect a growing understanding of the problem space? How do you make the TDD practice work for you instead of against you?

Update: I still don't think I fully understand it all, so I can't really make a decision about which answer to accept. Most of my leaps in understanding have happened in the comment sections, not in the answers. Here's a collection of my favorites so far:

"Anyone who uses terms like "risk-free" in software development is indeed full of shit. But don't write off TDD just because some of its proponents are hyper-susceptible to hype. I find it helps me clarify my thinking before writing a chunk of code, helps me to reproduce bugs and fix them, and makes me more confident about refactoring things when they start to look ugly"

-Kristopher Johnson

"In that case, you rewrite the tests for just the portions of the interface that have changed, and consider yourself lucky to have good test coverage elsewhere that will tell you what other objects depend on it."

-rcoder

"In TDD, the reason to write the tests is to do design. The reason to make the tests automated is so that you can reuse them as the design and code evolve. When a test breaks, it means you've somehow violated an earlier design decision. Maybe that's a decision you want to change, but it's good to get that feedback as soon as possible."

-Kristopher Johnson

[about testing interfaces] "A test would insert some elements, check that the size corresponds to the number of elements inserted, check that contains() returns true for them but not for things that weren't inserted, checks that remove() works, etc. All of these tests would be identical for all implementations, and of course you would run the same code for each implementation and not copy it. So when the interface changes, you'd only have to adjust the test code once, not once for each implementation."

-Michael Borgwardt
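
To make that last point concrete, here's a minimal sketch of what such a shared test might look like (assuming JUnit 4; the class and method names are only illustrative):

    import static org.junit.Assert.*;

    import java.util.HashMap;
    import java.util.Map;
    import java.util.TreeMap;

    import org.junit.Test;

    // One abstract test class encodes the behaviour promised by the Map
    // interface. The behavioural tests are written exactly once; each
    // implementation only supplies an instance to run them against.
    public abstract class MapContractTest {

        // Subclasses return the implementation under test.
        protected abstract Map<String, Integer> createMap();

        @Test
        public void sizeReflectsNumberOfInsertedElements() {
            Map<String, Integer> map = createMap();
            map.put("a", 1);
            map.put("b", 2);
            assertEquals(2, map.size());
        }

        @Test
        public void containsKeyIsTrueOnlyForInsertedKeys() {
            Map<String, Integer> map = createMap();
            map.put("a", 1);
            assertTrue(map.containsKey("a"));
            assertFalse(map.containsKey("z"));
        }

        @Test
        public void removeDeletesTheMapping() {
            Map<String, Integer> map = createMap();
            map.put("a", 1);
            map.remove("a");
            assertFalse(map.containsKey("a"));
        }
    }

    // One tiny subclass per implementation (each public and in its own file
    // in a real project); no test logic is duplicated.
    public class HashMapContractTest extends MapContractTest {
        @Override protected Map<String, Integer> createMap() { return new HashMap<String, Integer>(); }
    }

    public class TreeMapContractTest extends MapContractTest {
        @Override protected Map<String, Integer> createMap() { return new TreeMap<String, Integer>(); }
    }

If the interface changes, only MapContractTest has to be adjusted; the per-implementation subclasses stay as they are.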

A: 

it's no longer a simple matter of diving in and fixing "the bug", but you also have to rewrite all the tests.

A fundamental creed of TDD is to avoid duplication both in the production code AND in the test code. If a single design change means you have to rewrite everything, you weren't doing TDD (or at least not doing it correctly).

Ideally, in a well-designed system with proper separation of concerns, design changes are local, just like implementation changes. While the real world is rarely ideal, you still usually get something in between: you have to change some of the production code and some of the tests, but not everything, and the changes are mostly simple and may even be done automatically by refactoring tools.

Michael Borgwardt
Can you expand on this answer like.... A lot? I cannot connect it to reality at all.
Breton
So what happens when you have an interface that's implemented by, say, 30 other classes? Now the design has changed (say the client changed their mind, something outside your control) and that interface is no longer adequate and needs to be heavily modified. Now you need to modify 30 classes and 30 test fixtures that were testing whether those classes properly implemented the interface.
Davy8
An interface specifies behaviour, so while some things would be different, most of the tested behaviour should in fact be identical across the implementation classes, so there should only be one place that needs to be changed. A concrete example where this plays out perfectly: Java's Map interface. The specified behaviour is identical across implementations, so there would be one test class run three times for HashMap, TreeMap and LinkedHashMap, with a little extra that tests the specified iteration orders of the latter two.
Michael Borgwardt
"An interface specifies behavior", you are operating on a vastly different definition of "interface" than I am. Just so that we can get on the same page, how do you define "implementation", because you've just used up the word "interface" for my definition of "implementation". Or perhaps you see them as synonyms? I don't mean to be a pain, but I don't understand you at all, Michael.
Breton
I've read the comment again, like 5 or 6 times, and it makes less and less sense every time I read it. Pls help.
Breton
If an interface does not *specify* behaviour, then what exactly would you test to determine whether "classes properly implemented the interface"? I define "interface" as an abstract, high-level specification of behaviour, and "implementation" as a concrete, low-level, well, *implementation* of the behaviour. Again the Map interface: says things about inserting, removing, looking up elements. The TreeMap implementation has to deal with all the nitty gritty stuff about balanced trees, which are irrelevant to the specified behaviour.
Michael Borgwardt
Staying with the example: A test would insert some elements, check that the size corresponds to the number of elements inserted, check that contains() returns true for them but not for things that weren't inserted, check that remove() works, etc. All of these tests would be identical for all implementations, and of course you would run the same code for each implementation and not copy it. So when the interface changes, you'd only have to adjust the test code once, not once for each implementation.
Michael Borgwardt
Ahh, now the pieces are falling into place.
Breton
I still have one lingering question though. If the interface changes, doesn't that require you to change every single bit of code that depends on that interface, including the tests? I'm going a bit off track here, but do you know of some way to decouple an interface in a way that wouldn't require that?
Breton
Yes, the tests would need to change, but in most cases only once for the interface, not once for each implementation. As for the code that uses the interface, not all code may use the part of it that changes. Of course, if every method in the interface changes then that *is* a mountain of work (though there's still probably a lot of automated refactoring the IDE can do for you), but it's hard to imagine a real life case where that would happen. TDD could actually help you realize problems with the interface sooner, so that less dependent code has been written by the time you find out that you need to change it.
Michael Borgwardt
+3  A: 

One of the practices of TDD is the use of Baby Steps (which can feel very boring in the beginning): taking really small steps in order to understand your problem space and arrive at a good, satisfactory solution to your problem.

If you already know the design of your application up front, you aren't doing TDD at all. The design should emerge while you write your tests.

So the suggestion I would give is to concentrate on the baby steps in order to get a proper, testable design, roughly as in the sketch below.
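
As a rough illustration only (hypothetical names, assuming JUnit 4): each baby step is one small failing test, the simplest code that makes it pass, and then a quick clean-up before the next test.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Step 1 (red): write the smallest test that captures the next bit of behaviour.
    public class ShoppingCartTest {

        @Test
        public void emptyCartHasZeroTotal() {
            assertEquals(0, new ShoppingCart().totalInCents());
        }

        @Test
        public void totalIsTheSumOfItemPrices() {
            ShoppingCart cart = new ShoppingCart();
            cart.add(250);
            cart.add(100);
            assertEquals(350, cart.totalInCents());
        }
    }

    // Step 2 (green): the simplest implementation that makes the tests pass.
    // Step 3 (refactor): tidy up, rerun the tests, then move on to the next test.
    class ShoppingCart {
        private int totalInCents = 0;

        void add(int priceInCents) {
            totalInCents += priceInCents;
        }

        int totalInCents() {
            return totalInCents;
        }
    }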

Diego Dias
Good point about what TDD really is. Saying "I have to rewrite my tests because the design changed" doesn't make sense; if you are doing TDD, the tests represent the design.
Kristopher Johnson
If the tests are the design, then how do you know whether the design works? I'm not sure how to phrase this question. If the tests represent your assumptions, the tests themselves cannot tell you whether the assumptions are correct, without some real code, can they?
Breton
Tests don't represent your assumptions. Tests represent what you want the code to do.
Kristopher Johnson
Isn't that what the code does?
Breton
No, code does what it does, which may or may not be what you want it to do, or what you need it to do. Writing the tests first gives you a good definition of what you want the code to do, and running the tests verifies that it does it. But if that isn't helpful to you, then don't do it.
Kristopher Johnson
How is writing a test any better than writing the code itself? It's just some code too. How do I know that it's really testing that the code is doing what I want it to do without a unit test test?
Breton
Sorry, that was a silly question. What I really want to know is, why would I want to rewrite two lots of code every time the design changes, instead of just one? You're not really selling the value of tests to me. And keep in mind, I started out thinking they're a good idea, and you're starting to convince me that they aren't at all.
Breton
If writing a unit test test helps you figure out what your code needs to do, then by all means write a unit test test. Remember TDD is about _design_, not about fool-proof tests. If it isn't helping you with your design, then you're not doing TDD.
Kristopher Johnson
And I'm not selling anything. A lot of people find TDD helpful, but if you don't, then don't do it.
Kristopher Johnson
Please go back and read the comment I quoted in the OP. Now if there's going to be people telling me that I wouldn't be having all these problems I'm having, if only I would do TDD *properly*, how do you expect me to respond? "I'm Sorry, but you're full of shit." ? I'm really leaning in that direction. But some people, as you say, really do find value in it. I just desperately want to know why.
Breton
Anyone who uses terms like "risk-free" in software development is indeed full of shit. But don't write off TDD just because some of its proponents are hyper-susceptible to hype. I find it helps me clarify my thinking before writing a chunk of code, helps me to reproduce bugs and fix them, and makes me more confident about refactoring things when they start to look ugly. So TDD might be causing me to type more code, but I'm more productive than I am in code-first-ask-questions-later mode. But TDD is not a silver bullet, and doesn't replace other design and development skills.
Kristopher Johnson
+1  A: 

I don't think any real practitioner of TDD will claim that it completely eliminates the possibility of error or regression.

Remember that TDD is fundamentally about design, not about testing or quality control. Saying "all my tests pass" does not mean "I'm finished."

If your requirements or high-level design change drastically, then you may need to throw away all your tests along with all the code. That's just how things are sometimes. It doesn't mean that TDD isn't helping you.

Kristopher Johnson
This answer kind of single-handedly eliminates all the supposed advantages of TDD. If this is all true, what advantages do you get out of TDD, if it doesn't eliminate error, isn't about quality control, and causes lots of tasks to take significantly longer?
Breton
Moreover, how do you respond to the comment that I quoted, that boasts that if you've done TDD, then you don't have to worry about breaking patchy code, as long as the tests still pass?
Breton
He just *said* that "TDD is fundamentally about design" - the supposed advantage of TDD over "regular" unit tests is that it improves design, not test coverage. And how does admitting that tests can't prevent 100% of all errors eliminate the advantage of catching 95% (or whatever) of all errors?
Michael Borgwardt
Okay, so how do you respond to the comment that I quoted? Is that person wrong? Operating on incorrect assumptions?
Breton
The quoted person is indeed wrong, if that person claims that passing all tests is a guarantee that nothing has broken.
Kristopher Johnson
It's more like a strong indication (how strong depends on the coverage and quality of the tests) that nothing has broken... much stronger than you get with manual tests.
Michael Borgwardt
Michael says above "the whole point of automated tests is to provide quick feedback about possibly breaking changes. " How does this fit with "TDD is all about *design*" ?
Breton
In TDD, the reason to write the tests is to do design. The reason to make the tests automated is so that you can reuse them as the design and code evolve. When a test breaks, it means you've somehow violated an earlier design decision. Maybe that's a decision you want to change, but it's good to get that feedback as soon as possible.
Kristopher Johnson
+1  A: 

Properly applied, TDD should actually make your life a lot easier in the face of changing requirements.

In my experience, code that is easy to test is code that is orthogonal to other subsystems and has clearly defined interfaces. Given such a starting point, it is much easier to rewrite significant portions of your application, since you can work with confidence knowing that a) your changes will be isolated to a few subsystems, and b) any breakage will quickly show up as failing tests.

If, on the other hand, you're just slapping unit tests on your code after it has been designed, then you may well have problems when requirements change. There's a difference between tests that fail quickly when subsystems change (because they're effectively flagging regressions) and those that are brittle, because they depend on too many unrelated pieces of system state. The former should be fixable by a few lines of code, while the latter may leave you scratching your head for hours trying to unravel them.
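
As a hedged sketch of the difference (the interface and class names here are invented purely for illustration): when a class depends only on a small, explicit interface, its test needs nothing more than a one-line fake, so it can't be broken by unrelated system state.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // A small, explicit collaborator interface (invented for this example).
    interface TaxRateSource {
        double rateFor(String region);
    }

    // The class under test depends only on that interface, not on a database,
    // configuration files, or any other unrelated subsystem state.
    class PriceCalculator {
        private final TaxRateSource taxRates;

        PriceCalculator(TaxRateSource taxRates) {
            this.taxRates = taxRates;
        }

        long grossPriceInCents(long netPriceInCents, String region) {
            return Math.round(netPriceInCents * (1.0 + taxRates.rateFor(region)));
        }
    }

    public class PriceCalculatorTest {

        @Test
        public void appliesTheRegionalTaxRate() {
            // A trivial fake stands in for the real tax subsystem, so this test
            // only breaks when PriceCalculator or its interface actually changes.
            TaxRateSource tenPercentEverywhere = new TaxRateSource() {
                public double rateFor(String region) { return 0.10; }
            };
            PriceCalculator calculator = new PriceCalculator(tenPercentEverywhere);

            assertEquals(110, calculator.grossPriceInCents(100, "anywhere"));
        }
    }

A brittle test, by contrast, would have to spin up the database, load real tax tables, and share mutable state with other tests, so a change anywhere in that chain could break it.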

rcoder
Might I refer back to the title of the question. What if the bug is in the interface?
Breton
In that case, you rewrite the tests for just the portions of the interface that have changed, and consider yourself lucky to have good test coverage elsewhere that will tell you what other objects depend on it.
rcoder
A: 

Continuous Integration (CI) is one key. If your tests run automatically every time you check in to source control (and everyone else sees it if they fail), it's easier to avoid "stale" tests and stay in the green.

As Mr. Dias mentioned, Baby Steps are important. You make a small refactoring, you run your tests. If tests break, you immediately determine whether this is expected (a design change) or a failed refactoring. When tests are truly independent (which comes with practice), this is seldom very difficult. Evolve your design slowly.

See also http://thought-tracker.blogspot.com/2005/11/notes-on-pragmatic-unit-testing.html - and definitely buy the book!

EDIT: Perhaps I'm looking at this the wrong way. Say you had a legacy codebase that you wanted to redesign. The first thing I would try to do is add tests for the current behavior. Refactoring without tests is risky - you might change behavior. After that, I would start to clean up the design, in small steps, running my unit tests after each step. That would give me confidence that my changes weren't breaking anything.
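
For example, that first pass might look something like this (a rough sketch with invented names, assuming JUnit 4): pin down what the legacy code currently does, however odd, so the later clean-up has a safety net.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // A stand-in for the legacy code being characterized; in reality this
    // class already exists and is the thing you are nervous about touching.
    class LegacyInvoiceFormatter {
        static String format(long cents) {
            return String.format(java.util.Locale.US, "EUR %.2f", cents / 100.0);
        }
    }

    // Characterization tests record what the code *currently* does, not what
    // it should do, so refactoring can't silently change that behaviour.
    public class LegacyInvoiceFormatterTest {

        @Test
        public void formatsWholeEurosWithTwoDecimals() {
            assertEquals("EUR 12.00", LegacyInvoiceFormatter.format(1200));
        }

        @Test
        public void negativeAmountsKeepTheirSign() {
            assertEquals("EUR -0.50", LegacyInvoiceFormatter.format(-50));
        }
    }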

At some point the API might change. This would be a breaking change - clients would have to be updated. The tests would tell me this - which is good, because I'd have to update any existing clients (including the tests).

Now that's not TDD. But the idea is the same - the tests are specifications of behavior (yes, I'm shading into BDD), and they give me the confidence to refactor the implementation while ensuring that I preserve the behavior (as well as letting me know when I change the interface).

In practice, I've found TDD gives me immediate feedback on poor interface design. I'm my first client - I know when my API is hard to use.

TrueWill
I think you missed the target. What you suggest, I think, would make my stated problem vastly worse. Please explain why it wouldn't.
Breton
OK - you make a small change (refactoring) to your interface and test A immediately breaks (because you run your tests after every change). You refactor test A to pass. Repeat. Also note that with TDD, you want your tests to drive your design. When you're writing tests first, problems with the interface show up quickly - you're eating your own dog food. This all requires good tests - fast, testing only one thing per test, independent, etc. If you have 500-line test methods with huge setups, maintaining those will be difficult. Read books on testing and practice!
TrueWill
Please go back and read my original question. I don't think you fully appreciate the content of it. I am not talking about some small change. I'm talking about a change in some assumption which was fundamental to the entire design of the piece. Something which requires a change not just in the code, but requires me to rewrite most of the tests. This is something that I have to do frequently as my understanding of the problem improves. Why wouldn't test driven development slow me way down during that process?
Breton
"A fundamental assumption was wrong, and now you have to change it." In my experience, that does not happen often. When it does, I update my tests. With TDD, most changes are small and incremental. Designs evolve slowly. I urge you to try it, read books/blogs on it, and stick with it for awhile. I believe you will find that your fears are unfounded. You will make the occasional misstep, but what developer doesn't?
TrueWill
Are you saying that you instantly understand on a deep level, every facet of your problem space, and the only gaps to fill as you progress are minor? You must forgive me but I am finding it hard to believe. How often do you follow up on the software you inflict upon your victims, to see how well it's working? (I say that in the most loving way possible, all of us programmers are inflicting some kind of damage on our clients/victims. )
Breton
Also keep in mind that I have tried test driven development before, and I have read some literature. Do not assume that I am someone who is not even willing to try. Assume I'm someone who is willing to learn, but I'm having difficulty understanding some aspect of it.
Breton
At the very least, I'm having difficulty understanding some specific claims about TDD.
Breton
I don't mean to offend, and apologize if I implied anything about your skill level. I have been writing unit tests for years and try to practice test-first development. I'm not an expert, and I have much to learn. That said, I've seen tangible benefits from testing over and over - subtle bugs found, the courage to make radical implementation changes (because of tests ensuring no behavior changes), improved interfaces due to seeing how they play out in client code, etc. My software is used in-house, and I will hear about it if there are bugs in it.
TrueWill
I edited the post to expand upon this.
TrueWill
A: 

Coding something without knowing what will work best in the UI, while at the same time writing unit tests, is very time consuming. It's better to start out making some prototypes of the GUI to get the interaction right, and then rewrite it with unit tests (if your employer allows you).

neoneye
+1  A: 

The only true answer is it depends.

  • There are ways to do TDD wrong, such that it doesn't fit in with your environment and eats effort with minimal benefit.
  • There are ways to do TDD right, such that it both cuts costs and increases quality.
  • There are ways to do something similar-but-different to TDD, which may or may not get called TDD, and may or may not be more appropriate in your particular situation.

It's a strange quirk of the market for software tools and experts that, to maximise the revenue for those pushing them, they are always written as if they somehow apply to 'all software'.

Truth is, 'software' is every bit as diverse as 'hardware', and nobody would think of buying a book on bridge-making to design an electronic gadget or build a garden shed.

soru
Okay so there's a wrong way to do TDD, and there's a right way to do TDD. what are the defining features of each?
Breton
Asking for the right way to develop and test software is like asking 'what is the way to the bus stop?'. On a global forum, if you somehow got the right answer 'turn left at the shops', it would be by coincidence.
soru
+1  A: 

I think you have some misconceptions about TDD. For a good explanation and example of what it is and how to use it, I recommend reading Kent Beck's Test-Driven Development: By Example.

Here are a few further comments that may help you understand what TDD is and why some people swear by it:

"How do you combine test driven development with a design that has to change to reflect a growing understanding of the problem space?"

  • TDD is a technique for exploring a problem space and creating and evolving a design that meets your needs. TDD is not something you do in addition to doing design; it is doing design.

"How do you make the TDD practice work for you instead of against you?"

  • TDD is not "twice as much work" as not doing TDD. Yes, you'll write a lot of tests, but that doesn't really take much time, and the effort isn't wasted. You have to test your code somehow, right? Running automated tests is a lot quicker than testing manually whenever you change something.

  • A lot of TDD tutorials present highly detailed tests of every method of every class. In real life, people don't do this. It is silly to write a test for every setter, every getter, and so on. The Beck book does a good job of showing how to use TDD to quickly design and implement something, slowing down to "baby steps" only when things get tricky. See How Deep Are Your Unit Tests for more on this point.

  • TDD is not about regression testing. TDD is about thinking before you write code. But having regression tests is a nice side benefit. They don't guarantee that code will never break, but they help a lot.

  • When you make changes that cause tests to break, that's not a bad thing; it's valuable feedback. Designs do change, and your tests aren't written in stone. If your design has changed so much that some tests are no longer valid, then just throw them away. Write the new tests you need to be confident about the new design.

Kristopher Johnson
A: 

We tend to do much less design up front with TDD, knowing it can change. I have taken projects through huge gyrations (it's a web app, no it's a RESTful server, no it's a bot). The tests provide me with the ability to refactor, restructure, and evolve my code much more easily than untested code would allow. Although it seems contradictory, it is true-- even though you have more code, you are able to make major changes and have confidence that nothing has broken in the existing functionality.

I understand your concern that changes to fundamental assumptions make you throw out tests. This seems intuitive, but I personally haven't seen it. Some tests go, but most are still valid-- often a major change isn't as major as it seems at first. Plus, as you get better at writing tests, you tend to write less brittle ones, which helps.

ndp