I'm building a new application and trying to adhere to "test-first" development as faithfully as I can. I'm finding myself in situations where I need to implement or change a feature that has the effect of invalidating a number of existing unit tests. How should I be dealing with this? As I see it, there are 3 options:

  • Update or remove all existing tests to meet the new feature requirements (adding any more as necessary), then implement the feature

  • Implement the feature first, run tests to see failures, and update or remove any failed tests (adding any more as necessary)

  • Add new tests for the new feature, implement the feature, run all tests to see the old ones fail, remove or update the old tests as necessary

The first option adheres to TDD, but can be excruciatingly counter-productive. The second option is the easiest, but you wouldn't be faithfully testing first and may not be properly "covered." The third option is a compromise between the two and attractive to a degree, but you run the risk of rewriting a test when you could have just updated an old one.

I don't feel like I have any clear strategy here. What do you do in these situations?

+8  A: 

I would choose one test and change it to require the new feature. If there aren't any obvious candidates, i.e., the feature is truly new, I would create one. I would then write the code to pass that test. At that point I would run my other tests and notice that some of them fail. I would then revisit each failing test in turn, either correcting it to reflect the new feature (so it passes with no further code changes) or updating it to exercise the new feature (which may require further changes to the code under test).
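
For instance, here is a minimal Python/pytest sketch of that flow; the `Cart` class and the bulk-discount rule are invented purely for illustration:

    import pytest

    class Cart:
        """Toy shopping cart, made up for this example."""

        def __init__(self):
            self.items = []

        def add(self, name, quantity, unit_price):
            self.items.append((name, quantity, unit_price))

        def total(self):
            total = 0.0
            for _name, qty, price in self.items:
                subtotal = qty * price
                if qty >= 10:  # the new feature: 10% bulk discount
                    subtotal *= 0.9
                total += subtotal
            return total

    # The old test asserted total() == 20.00. Change it first to require
    # the new behaviour, watch it fail, then write the discount logic
    # above to make it pass.
    def test_total_applies_bulk_discount():
        cart = Cart()
        cart.add("widget", quantity=10, unit_price=2.00)
        assert cart.total() == pytest.approx(18.00)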

tvanfosson
+3  A: 

I would create new tests for the new feature, and update the existing tests to accommodate it. If you break an already-working test, you should fix it.

Robert Greiner
+3  A: 

Implementing the feature includes writing/updating the unit tests; that's fundamental to test-driven development. So your second and third options are also TDD, not just the first. In practice I suspect you'll want your third option with some mods:

  1. Write tests for the feature (since that helps you validate your API/UI for it)
  2. Write the feature
  3. Review the unit tests in that general area to see those that should break
  4. Run the tests
  5. Fix the ones that break. If any tests in your list from #3 didn't break, fix them too: they should have broken, which means they aren't actually covering the changed behaviour (see the sketch after this list). If any broke that you didn't identify, investigate to ensure the failure is, in fact, correct, then fix the test or the feature as appropriate.
  6. Profit ;-)
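
To make step 5 concrete, here is a hypothetical Python/pytest sketch of a test from the "should break" list that didn't break, because it never exercised the behaviour that changed:

    def make_username(first, last):
        # The new feature: usernames are now lower-cased.
        return f"{first}.{last}".lower()

    # On the "should break" list, yet it still passes: its inputs were
    # already lower case, so it never exercised the changed behaviour.
    def test_username_joins_names():
        assert make_username("ada", "lovelace") == "ada.lovelace"

    # The fix: strengthen the test so it actually covers the change.
    def test_username_is_lower_cased():
        assert make_username("Ada", "Lovelace") == "ada.lovelace"
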
T.J. Crowder
A: 

Get rid of the old tests and write new ones. You may be able to borrow code from the old tests in a few places, but you are better off with tests philosophically in line with what you are trying to do than attempting to change the nature of the old ones.

Tests are there to support what you are trying to accomplish, and should not work against you.

Kendall Helmstetter Gelner
A: 

I think there are two things to consider here, and I don't know whether you're thinking about only one of them or both.

The first part is that you have changed a feature because the specification (or expected behaviour) has changed. In this case I think the correct thing to do is to remove all tests that describe behaviour that is no longer valid. Since I'm lazy, I would just comment them out or skip them for now. Then I'd start writing new tests (or uncommenting and modifying old ones) to describe the new behaviour until done.
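
In pytest, for example, parking an obsolete test takes one decorator (the test name and reason below are made up):

    import pytest

    # Behaviour the new specification removed: parked rather than
    # deleted, until the tests describing the new behaviour are done.
    @pytest.mark.skip(reason="obsolete: anonymous checkout removed from spec")
    def test_checkout_allows_anonymous_users():
        ...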

The second part applies if your new feature changes an interface used by other components, so that their tests start failing just because you changed it. In that case I would simply fix those tests afterwards, once the feature is finished.
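
As a hypothetical example of that second case, suppose the feature added a required parameter to a function other components call; their tests then fail for purely mechanical reasons and just need their call sites updated once the feature is done:

    def format_price(amount, currency):  # was: format_price(amount)
        return f"{amount:.2f} {currency}"

    # A downstream component's test, failing only because of the new
    # signature; fixed afterwards by passing the extra argument.
    def test_receipt_line_shows_price():
        assert format_price(4.5, "USD") == "4.50 USD"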

Cellfish
+1  A: 

I think all approaches are reasonable. You will get to the same result.

Some people like smaller steps, and to work closer to the original intent of TDD: write a line of test, write a line of code to make it pass, repeat. If this is you, work incrementally on your old tests first, evolving (or removing) them toward the new system.

If you don't mind biting off a larger chunk, dive in and fix the new stuff first. I find that this is more natural, especially when pair programming, when you can be a bit bolder.

It may really depend on your comfort and confidence level.

I'd second the notion that ideally one change should break only one test, so you may want to refactor the test code until that is the behaviour you get. Some sort of shared setup method may be the solution.
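
A minimal Python/pytest sketch of that shared-setup idea (the `Account` class is invented for illustration): knowledge of how to build the object under test lives in one fixture, so a change breaks one place instead of every test.

    import pytest

    class Account:
        def __init__(self, owner, balance):
            self.owner = owner
            self.balance = balance

        def deposit(self, amount):
            self.balance += amount

        def withdraw(self, amount):
            self.balance -= amount

    # The only place that knows how an Account is built; if the
    # constructor changes, only this fixture needs updating.
    @pytest.fixture
    def account():
        return Account(owner="alice", balance=100)

    def test_deposit_increases_balance(account):
        account.deposit(50)
        assert account.balance == 150

    def test_withdraw_decreases_balance(account):
        account.withdraw(30)
        assert account.balance == 70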

ndp