I've heard that projects developed using TDD are easier to refactor because the practice yields a comprehensive set of unit tests, which will (hopefully) fail if any change has broken the code. All of the examples I've seen of this, however, deal with refactoring implementation - changing an algorithm with a more efficient one, for example.

I find that refactoring architecture is a lot more common in the early stages where the design is still being worked out. Interfaces change, new classes are added & deleted, even the behavior of a function could change slightly (I thought I needed it to do this, but it actually needs to do that), etc... But if each test case is tightly coupled to these unstable classes, wouldn't you have to be constantly rewriting your test cases each time you change a design?

Under what situations in TDD is it okay to alter and delete test cases? How can you be sure that altering the test cases doesn't break them? Plus it seems that having to synchronize a comprehensive test suite with constantly changing code would be a pain. I understand that the unit test suite could help tremendously during maintenance, once the software is built, stable, and functioning, but that's late in the game whereas TDD is supposed to help early on as well.

Lastly, would a good book on TDD and/or refactoring address these sort of issues? If so, which would you recommend?

+3  A: 

TDD says write a failing test first. The test is written to show that the developer understands what the use case/story/scenario/process is supposed to achieve.

You then write the code to meet the test.

If the requirement changes or has been misunderstood, edit or rewrite the test first.

Red-bar, Green-bar, right?
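A minimal sketch of that red-bar/green-bar cycle, using a hypothetical `add` function for illustration:

```python
# Red bar: write this test first. It fails at first,
# because add() doesn't exist yet.
def test_add():
    assert add(2, 3) == 5

# Green bar: now write just enough production code to make the test pass.
def add(a, b):
    return a + b
```

If the requirement later changes (say, `add` must reject non-numbers), you first edit `test_add` to encode the new expectation, watch it fail, then change `add`.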

Fowler's Refactoring is the reference for refactoring, strangely enough.

Scott Ambler's series of articles in Dr. Dobb's ('The Agile Edge??') is a great walkthrough of TDD in practice.

Ken Gentle
+5  A: 

The main benefit TDD brings to refactoring is that developers have more courage to change their code. With unit tests in place, developers dare to change the code and then just run the tests. If the xUnit bar is still green, they have the confidence to go ahead.

Personally, I like TDD, but I don't encourage over-TDD. That is, don't write too many unit test cases; the unit tests should be just enough. If you over-test, you may find yourself in a dilemma when you want to make an architectural change: one big change in production code will force changes to a lot of unit test cases. So, keep your unit tests sufficient, but no more.

Morgan Cheng
The argument, however, is that partial TDD isn't TDD. Writing code directly, without tests for it first, breaks the whole TDD mantra. How is it possible not to over-TDD if all code is written test-first?
Cybis
+1  A: 

Kent Beck's TDD book.

Test first. Following the S.O.L.I.D. OOP principles and using a good refactoring tool are indispensable, if not required.

Tim Tonnesen
what is S.O.L.I.D? please explain.
Tilendor
SOLID is Robert Martin's set of design principles from 'Agile Software Development, Principles, Patterns, and Practices': Single Responsibility Principle, Open Closed Principle, Liskov Substitution Principle, Interface Segregation Principle, and Dependency Inversion Principle.
quamrana
+1  A: 

I would recommend (as others have):

Mitch Wheat
+1  A: 

Under what situations in TDD is it okay to alter and delete test cases? How can you be sure that altering the test cases don't break them? Plus it seems that having to synchronize a comprehensive test suite with constantly changing code would be a pain.

The point of tests and specs is to define the correct behaviour of a system. So, very simply:

if definition of correctness changes
  change tests/specs
end

if definition of correctness does not change
  # no need to change tests/specs
  # though you still can for other reasons if you want/need
end

So, if application/system specifications or desired behaviour changes, it is a necessity to change the tests. Changing only the code, but not the tests, in such a situation is obviously broken methodology. You might look at it as "a pain" but having no test suite is more painful. :) As others have mentioned, having that freedom to "dare" to change code is very empowering and liberating indeed. :)
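To make the pseudocode above concrete, here is a hypothetical example of a changed definition of correctness (a discount rate in a pricing rule); the function name and numbers are invented for illustration. The test is edited first to encode the new spec, then the code follows:

```python
# Spec change: discount goes from 10% to 15%.
# Per TDD, the test is edited first (red), then the code (green).

def discounted_price(price):
    # Updated to match the new spec that the test now encodes.
    return round(price * 0.85, 2)

def test_discounted_price():
    # Old assertion (spec: 10% off) was: assert discounted_price(100) == 90.0
    # New assertion (spec: 15% off):
    assert discounted_price(100) == 85.0
```

Changing the test here isn't "breaking" it; the test is simply tracking the new definition of correct behaviour.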

Pistos
+3  A: 

changing an algorithm with a more efficient one, for example.

This isn't refactoring; this is performance optimization. Refactoring is about improving the design of existing code, that is, changing its shape to better meet the needs of the developer. Changing code with the intent of affecting externally visible behavior is not refactoring, and that includes changes for efficiency.

Part of the value of TDD is that your tests help you hold the visible behavior constant while changing the way you produce that result.
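A small sketch of that idea, with a hypothetical sorting function: the test pins down only the visible behavior, so the implementation can be swapped (say, a hand-rolled insertion sort replaced by the built-in sort) without touching the test.

```python
def sort_numbers(xs):
    # Before the change: a hand-rolled insertion sort.
    # After the change: delegate to Python's built-in sorted().
    # Either way, the test below stays green, because only the
    # externally visible behavior is asserted.
    return sorted(xs)

def test_sort_numbers():
    assert sort_numbers([3, 1, 2]) == [1, 2, 3]
    assert sort_numbers([]) == []
```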

Jay Bazuzi
I know. Switching algorithms is an optimization issue, not a refactoring one. Refactoring is rearranging code: moving behavior to more appropriate classes, splitting large classes into smaller, more cohesive ones, etc. It's just that many online articles don't show realistic examples of this.
Cybis
+7  A: 

One thing you need to keep in mind is that TDD is not mainly a testing strategy, but a design strategy. You write the tests first, because that helps you come up with a better decoupled design. And a better decoupled design is easier to refactor, too.

When you change the functionality of a class or method, it's natural that the tests have to change, too. In fact, following TDD would mean that you change the tests first, of course. If you have to change a lot of tests to change just a single bit of functionality, that typically means that most tests are overspecifying the behavior - they are testing more than they should test. Another problem could be that a responsibility isn't well encapsulated in your production code.

Whatever it is, when you experience many tests failing because of a small change, you should refactor your code so that it doesn't happen again in the future. It's always possible to do that, though not always obvious how to.
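A hypothetical illustration of "overspecifying" (the `greet` function and its message are invented for the example): the first test pins down an incidental detail, the exact message format, so any harmless rewording breaks it; the second asserts only the behavior callers actually rely on.

```python
def greet(name):
    return "Hello, %s! Welcome." % name

def test_overspecified():
    # Breaks on any rewording of the message, even when the
    # behavior callers care about is unchanged.
    assert greet("Ada") == "Hello, Ada! Welcome."

def test_behavior_only():
    # Survives harmless format changes: it only checks that the
    # greeting addresses the right person.
    assert "Ada" in greet("Ada")
```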

With bigger design changes, things can become a bit more complicated. Yes, sometimes it will be easier to write new tests and discard the old ones. Sometimes, you can at least write some integration tests that test the whole part that gets refactored. And you hopefully still have your suite of acceptance tests, that are mostly unaffected.

I haven't read it yet, but I have heard good things about the book "XUnit Test Patterns - Refactoring Test Code".

Ilja Preuß
What do you do in cases where a single algorithm has many, many boundary conditions? Parsing text with a non-trivial BNF grammar is a good example. You have a parser half-finished, but then need to change the grammar slightly. Because your syntax tree changes, all your tests break. Damn 300 char lim
Cybis
All the tests don't *need* to break. You need to change your design so that it follows the Single Choice Principle - there should be exactly one place in your parser that is affected by this small change - and only the tests for this part of the parser should need to change.
Ilja Preuß
+2  A: 

Plus it seems that having to synchronize a comprehensive test suite with constantly changing code would be a pain. I understand that the unit test suite could help tremendously during maintenance, once the software is built, stable, and functioning, but that's late in the game whereas TDD is supposed to help early on as well.

I do agree that the overhead of maintaining a unit test suite can be felt in these early stages, when major architectural changes are taking place, but my opinion is that the benefits of having unit tests far outweigh this drawback. I think too often the problem is a mental one: we tend to think of our unit tests as second-class citizens of the code base, and we resent having to mess with them. But over time, as I've come to depend on them and appreciate their usefulness, I've come to think of them as no less important, and no less worthy of maintenance and work, than any other part of the code base.

Are the major architectural "changes" taking place truly only refactorings? If you are only refactoring, however dramatically, and tests begin to fail, that may tell you that you've inadvertently changed functionality somewhere. Which is just what unit tests are supposed to help you catch. If you are making sweeping changes to functionality and architecture at the same time, you may want to consider slowing down and getting into that red/green/refactor groove: no new (or changed) functionality without additional tests, and no changes to functionality (and breaking tests) while refactoring.

Update (based on comments):

@Cybis has raised an interesting objection to my claim that refactoring shouldn't break tests because refactoring shouldn't change behavior. His objection is that refactoring does change the API, and therefore tests "break".

First, I would encourage anyone to visit the canonical reference on refactoring: Martin Fowler's bliki. Just now I reviewed it and a couple things jump out at me:

  • Is changing an interface refactoring? Martin refers to refactoring as a "behavior-preserving" change, which means when the interface/API changes then all callers of that interface/API must change as well. Including tests, I say.
  • That does not mean that the behavior has changed. Again, Fowler emphasizes that his definition of refactoring is that the changes are behavior preserving.

In light of this, if a test or tests has to change during a refactoring, I don't see this as "breaking" the test(s). It's simply part of the refactoring, of preserving the behavior of the entire code base. I see no difference between a test having to change and any other part of the code base having to change as part of a refactoring. (This goes back to what I said before about considering tests to be first-class citizens of the code base.)

Additionally, I would expect the tests, even the modified tests, to continue to pass once the refactoring is done. Whatever that test was testing (probably the assert(s) in that test) should still be valid after a refactoring is done. Otherwise, that's a red flag that behavior changed/regressed somehow during the refactoring.

Maybe that claim sounds like nonsense, but think about it: we think nothing of moving blocks of code around in the production code base and expecting them to continue to work in their new context (new class, new method signature, whatever). I feel the same way about a test: perhaps a refactoring changes the API that a test must call, or a class that a test must use, but in the end the point of the test should not change because of a refactoring.

(The only exception I can think of to this is tests that test low-level implementation details that you may want to change during a refactoring, such as replacing a LinkedList with an ArrayList or something. But in that case one could argue that the tests are over-testing and are too rigid and fragile.)
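That LinkedList/ArrayList point can be sketched in Python with a hypothetical `Queue` class (all names invented for the example): the fragile test asserts which data structure backs the class, so swapping the implementation breaks it; the robust test asserts only the FIFO behavior.

```python
from collections import deque

class Queue:
    def __init__(self):
        # Implementation detail: swapping deque for a plain list
        # would break test_fragile but not test_robust.
        self._items = deque()
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.popleft()

def test_fragile():
    # Couples the test to the backing data structure.
    assert isinstance(Queue()._items, deque)

def test_robust():
    # Asserts FIFO behavior only, regardless of backing structure.
    q = Queue()
    q.push(1)
    q.push(2)
    assert q.pop() == 1
```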

Scott Bale
I never understood why people say "refactoring shouldn't break tests because behavior doesn't change". Interfaces change! If you have a large function and a couple dozen unit tests for it, testing different boundary conditions, they all break as soon as you refactor the function into smaller ones!
Cybis
Why can't I edit comments? Arg. What I meant is that tests break if you refactor interfaces such that you need to call different functions to access the same behavior (this type of refactoring can improve readability and shouldn't be avoided).
Cybis
Thanks for the comments @Cybis, I've updated my answer in response to them.
Scott Bale
That is an excellent explanation. Thank you. Would it be fair to change the accepted answer? lol. One more thing: is it typical for unit tests to be so tightly coupled to the production code? For example, having a dozen (or more) tests for something used only once or twice?
Cybis
Thanks @Cybis. IMHO it's not so important how frequently some method is used. But if a method requires dozens of tests to test all the corner cases, maybe that method is doing too much and should be refactored. In which case, typically, the tests have to move around to new homes as well.
Scott Bale