Coding test-first, I find that perhaps 3/4 of my code is unit tests; if I were truly extreme, and didn't write a line of code except to fix a failing unit test, this ratio would be even higher. Maintaining all these unit tests adds a huge amount of inertia to code changes. Early on, I suck it up and fix them. As soon as there's pressure, I end up with a broken_unit_tests directory to revisit 'when there's time'. It feels like TDD is putting in high coverage too soon, before the design has had time to crystallize.

How do I find my way out of this dilemma, and start welcoming changing requirements like I'm supposed to?

+1  A: 

I guess the idea is to throw away unit tests that no longer test appropriate behavior and write new ones. It's also good to write unit tests so that they reflect behavior rather than implementation; that way they're more independent of the design.
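To illustrate the behavior-vs-implementation distinction, here is a minimal Python sketch using a hypothetical `Stack` class: the first test is coupled to the internal storage and breaks under refactoring, while the second survives any redesign that keeps the observable contract.

```python
# Hypothetical class under test: a simple LIFO stack.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# Brittle: coupled to the internal representation; it breaks if the
# storage ever changes from a list, even though behavior is intact.
def test_stack_internal_list():
    s = Stack()
    s.push(1)
    assert s._items == [1]

# Robust: checks only the observable LIFO behavior, so it survives
# any redesign that keeps push/pop working.
def test_stack_pops_in_lifo_order():
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2
    assert s.pop() == 1

test_stack_internal_list()
test_stack_pops_in_lifo_order()
```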

Generally I'm not an advocate of TDD anyway. :)

Corporal Touchy
+3  A: 

Unit tests should be fairly immutable.

If you're writing a test, writing code to get that test to pass, and breaking your other tests, then your new code should be considered "wrong".

Now obviously, in some cases, you may need to rewrite a test if you change an API contract, but for the most part, you should not consider "rewrite the test" as a valid way to do TDD.

warren_s
+4  A: 

I think you've got it the other way around. When implementing a change that could break the unit tests, you should update the unit tests first. That way you will never have a broken unit test alongside working code: either you have a failing unit test because the code is not ready yet, or both parts work fine.

If you believe that's overhead, just think of the time you'll save on bug-fixing in the future.

Also you could try to work in short cycles. I.e. instead of

  1. Do a lot of changes
  2. Fix a lot of unit tests
  3. Repeat

Try

  1. Plan a small change
  2. Change the relevant unit test(s)
  3. Change relevant code
  4. Repeat
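One pass through that small cycle might look like this Python sketch (the `add` function and its negative-number requirement are invented for illustration):

```python
# Step 2: change the relevant test first -- it fails until the
# behavior exists.
def test_add_handles_negative_numbers():
    assert add(-2, 3) == 1

# Step 3: change just enough code to make the test pass.
def add(a, b):
    return a + b

# Step 4: run the suite green, then plan the next small change.
test_add_handles_negative_numbers()
```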

It's difficult to work your way through a huge backlog of unit tests when there is a deadline looming and a manager over your shoulder. Doing the code and the tests at the same time is actually easy when you get into the habit.

Ilya Kochetov
+1  A: 

Your tests are probably not focused enough, or you have too many dependencies in your system.

When I change a fairly important aspect of my code, what I do most of the time is develop a new test suite for my change without breaking the old one, so the software works the old way and the new way in parallel. Once I'm happy with my refactoring, I delete the old-way code and its tests.

Not sure it's 100% clear...
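A minimal Python sketch of that idea, with hypothetical `slugify_old`/`slugify_new` functions standing in for the two designs:

```python
# Hypothetical old implementation, still covered by its old suite.
def slugify_old(title):
    return title.lower().replace(" ", "-")

# New design being developed in parallel, with its own new suite.
def slugify_new(title):
    return "-".join(title.lower().split())

# The old suite keeps passing untouched...
def test_slugify_old():
    assert slugify_old("Hello World") == "hello-world"

# ...while the new suite grows alongside it.  Once the refactoring
# is done, delete slugify_old and test_slugify_old together.
def test_slugify_new():
    assert slugify_new("Hello World") == "hello-world"

test_slugify_old()
test_slugify_new()
```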

pmlarocque
+8  A: 

Setting the aspect of programmer discipline aside... (it's a personal thing whether you're okay with checking in without doing a buddy build or without fixing all the tests; Agile assumes high discipline... and the courage & support to stay on the right path under pressure :),

If you find that making a single change fails multiple tests, it's a smell that something is wrong with your tests. Fragile tests are common when you start out with TDD... if you spend more time fixing your tests than fixing your code, stop, breathe, and reflect. Fix the disease rather than the symptom.
If you have some code snippets, we could discuss. As it stands, I don't think I can help you out much...
Guideline: a test should fail for only one reason. Conversely, every failing test should point out the exact, unique location of the defect. Two tests should not fail due to the same change. Unless you're making sweeping architecture-level changes, this should be rare.
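A small Python sketch of that guideline, with a hypothetical `normalize` function: splitting one catch-all test into focused ones makes each red test point at a single behavior.

```python
# Hypothetical function under test.
def normalize(name):
    return name.strip().lower()

# One catch-all test: when it fails, you still don't know whether
# stripping or lower-casing regressed.
def test_normalize():
    assert normalize("  Bob  ") == "bob"

# Focused tests: each can fail for only one reason, so a failure
# pinpoints exactly one defect.
def test_normalize_strips_whitespace():
    assert normalize("  bob  ") == "bob"

def test_normalize_lowercases():
    assert normalize("Bob") == "bob"

test_normalize()
test_normalize_strips_whitespace()
test_normalize_lowercases()
```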

Gishu
"Fragile Tests" is the term I was looking for. Here, have a cookie!
warren_s
A: 

OP here - looks like this question has run its course. Thanks to everyone who posted. I'm not choosing an answer because they all have useful advice, and I can't see any as being technically 'the right answer' for future readers.

fizzer