+3  A: 

One big advantage of making the test fail first is that it ensures your test is really testing what you think it is. A test can contain subtle bugs that cause it to not really test anything at all.

For example, I once saw in our C++ code base someone check in the test:

    assertTrue(x = 1);

Clearly they didn't make the test fail first, since this assertion doesn't test anything at all.
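
For illustration, here is a minimal sketch (using a plain assert in place of the framework's assertTrue, with a made-up variable) of why that assertion can never fail:

    #include <cassert>

    int x = 0;

    void buggyTest() {
        // The assignment x = 1 evaluates to 1, which is true,
        // so this assertion passes no matter what x originally held.
        assert(x = 1);
    }

    void fixedTest() {
        // The comparison x == 1 actually checks the value,
        // so this assertion can fail when x is wrong.
        assert(x == 1);
    }

    int main() {
        buggyTest();   // always "passes"
        x = 0;
        fixedTest();   // genuinely fails when x != 1
    }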

David Norman
+2  A: 

Uh... I read the TDD cycle as:

  • write the test first, which will fail because the code is just a stub
  • write the code so that the test passes
  • refactor as necessary

There's no obligation to keep writing tests that fail; the first one fails because there's no code to do anything yet. The point of the first test is to decide on the interface!
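
For example, a minimal sketch of that cycle for a hypothetical sort routine (names are made up; a plain assert stands in for a test framework):

    #include <cassert>
    #include <vector>

    // Step 1: the stub the first test forced us to define.
    // With this stub, the test below fails (red).
    std::vector<int> sorted(std::vector<int> v) {
        return v;  // step 2 replaces this with std::sort(v.begin(), v.end())
    }

    // The first test, written before the real implementation.
    void testSortsAscending() {
        std::vector<int> expected{1, 2, 3, 4};
        assert(sorted({4, 3, 2, 1}) == expected);
    }

    int main() {
        testSortsAscending();  // red with the stub, green after step 2
    }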

EDIT: There seems to be some misunderstanding of the "red-green-refactor" mantra. According to wikipedia's TDD article

In test-driven development, each new feature begins with writing a test. This test must inevitably fail because it is written before the feature has been implemented.

In other words, the must-fail test is for a new feature, not for additional coverage!

EDIT: Unless you're talking about writing a regression test to reproduce a bug, of course!

Steven A. Lowe
Well, you should re-read whatever you read (or read something else ;), as the TDD cycle in fact is *red-green-refactor* - that is, write a failing test, make it pass, refactor the code, repeat.
Ilja Preuß
@[Ilja Preuß]: reference, please. My understanding is that the failing test is to establish the interface before you code a NEW FEATURE. What you're advocating makes no sense.
Steven A. Lowe
@[Ilja Preuß]: see edits; failing tests are for new features!
Steven A. Lowe
In my opinion, every defect should also be exposed via a new unit test, whenever possible, before correcting it. Limiting unit testing to just new functionality is short-sighted.
joseph.ferris
@[Joseph.Ferris]: I agree, and please do not construe anything in the above to the contrary; the context of the original question was new development, not regression tests. Edited for clarity, thanks!
Steven A. Lowe
+2  A: 

Hard-core TDDers would say you always need to see the test fail first, to verify that a passing test isn't a false positive, but I think in reality a lot of developers skip the failing test.

Jim Anderson
+3  A: 

If you are writing a new piece of code, you write the test, then the code. That means the first time around you always have a failing test (because it is executed against a dummy interface). Then you may refactor several times, and in that case you may not need to write additional tests, because the ones you have may already be enough.

However, you may also want to maintain some existing code with TDD methods; in this case, you first have to write characterization tests (which by definition will never fail, because they are executed against working code), then refactor.
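
For instance, a characterization test might look like this sketch (the legacy function and its behaviour are hypothetical); it is green from the first run and exists only to catch behaviour changes during the refactoring:

    #include <cassert>
    #include <string>

    // Hypothetical legacy function we want to refactor.
    std::string formatName(const std::string& first, const std::string& last) {
        return last + ", " + first;  // whatever the existing behaviour happens to be
    }

    // Characterization test: pins down what the code does today,
    // not what a specification says it should do.
    void testFormatNameCurrentBehaviour() {
        assert(formatName("Ada", "Lovelace") == "Lovelace, Ada");
    }

    int main() {
        testFormatNameCurrentBehaviour();
    }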

Roberto Liffredo
A: 

You have to be sure your test is actually testing something. I would suggest having the test pass in 4,3,2,1 and then temporarily refactoring the method to return exactly what it was given. Now the test fails, which proves that your method is at least returning a list, albeit an unsorted one. Then add the bubble sort back to make the test pass.
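
A sketch of that sanity check, assuming the method under test takes and returns a std::vector<int> (a plain assert stands in for the test framework):

    #include <algorithm>
    #include <cassert>
    #include <vector>

    std::vector<int> mySort(std::vector<int> v) {
        // Temporarily return the input untouched and watch the test go red...
        // return v;
        // ...then restore the real sort and watch it go green again.
        std::sort(v.begin(), v.end());
        return v;
    }

    void testSortsDescendingInput() {
        std::vector<int> expected{1, 2, 3, 4};
        assert(mySort({4, 3, 2, 1}) == expected);
    }

    int main() {
        testSortsDescendingInput();
    }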

Andrew Cowenhoven
That's not an answer to the question of what to do when you have already implemented the algorithm.
Ilja Preuß
I don't quite understand. I'm basically asking whether it's ridiculous to change working code (and possibly have to design a completely new algorithm) just to make a test fail, then revert the code back to what it was again to pass. It seems like a lot of effort just to introduce an error.
Cybis
If you think the code is solid, don't change it and don't bother with a unit test. If you think a unit test is needed, you may want the sanity check of the failing test as described by me and others above. If creating a failing test adds risk, don't do it. You want to reduce risk here.
Andrew Cowenhoven
+10  A: 

There are two reasons for writing failing tests first and then making them run;

The first is to check whether the test is actually testing what you wrote it for. You first check that it fails, then you change the code to make the test pass, then you check that it passes. It seems stupid, but I've had several occasions where I added a test for code that already worked, only to find out later that I had made a mistake in the test that made it always pass.

The second, and most important, reason is to prevent you from writing too many tests. Tests reflect your design, your design reflects your requirements, and requirements change. You don't want to have to rewrite lots of tests when that happens. A good rule of thumb is to have every test fail for only one reason, and to have only one test fail for that reason. TDD tries to enforce this by repeating the standard red-green-refactor cycle for every test, every feature and every change in your code base.

But of course rules are made to be broken. If you keep in mind why these rules were made in the first place, you can be flexible with them. For example, when you find that you have a test that tests more than one thing, you can split it up. Effectively you have then written two new tests that you haven't seen fail before. Breaking and then fixing your code to see the new tests fail (and then pass) is a good way to double-check things.
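
For example, splitting a two-purpose test might look like this sketch (mySort is a stand-in for the code under test):

    #include <algorithm>
    #include <cassert>
    #include <vector>

    std::vector<int> mySort(std::vector<int> v) {
        std::sort(v.begin(), v.end());
        return v;
    }

    // Before the split: this test can fail for two unrelated reasons.
    void testSortEverything() {
        assert(mySort({}).empty());                                // reason 1
        assert(mySort({3, 1, 2}) == (std::vector<int>{1, 2, 3}));  // reason 2
    }

    // After the split: each test can fail for exactly one reason.
    void testEmptyInputStaysEmpty() {
        assert(mySort({}).empty());
    }

    void testElementsAreOrdered() {
        assert(mySort({3, 1, 2}) == (std::vector<int>{1, 2, 3}));
    }

    int main() {
        testEmptyInputStaysEmpty();
        testElementsAreOrdered();
    }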

Mendelt
I do not understand this explanation. TDD clearly calls for a 'failing test' only when starting a new feature. The red-green-refactor cycle is for incremental improvements of the code base as new features are added.
Steven A. Lowe
I don't think there's much difference between how you do an incremental improvement and how you add a new feature. You have a code base that has to change for some reason; you write a failing test describing the required behaviour and then make it pass. Red in red-green-refactor means a failing test.
Mendelt
I agree with what you say, but I still don't understand your answer! ;-)
Steven A. Lowe
+2  A: 

There are reasons to write tests in TDD beyond just "test-first" development.

Suppose that your sort method has some other properties beyond the straight sorting action, e.g. it validates that all of the inputs are integers. You don't initially rely on this and it's not in the spec, so there's no test.

Later, if you decide to exploit this additional behaviour, you need to write a test so that anyone else who comes along and refactors doesn't break this additional behaviour you now rely on.
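
A sketch of such a pinning test (the function and its extra behaviour are made up just to illustrate the idea); it passes on its first run, which is exactly the point:

    #include <algorithm>
    #include <cassert>
    #include <vector>

    // Hypothetical sort whose documented job is ordering; as a side effect it
    // also removes duplicates -- extra behaviour callers have started relying on.
    std::vector<int> sortedUnique(std::vector<int> v) {
        std::sort(v.begin(), v.end());
        v.erase(std::unique(v.begin(), v.end()), v.end());
        return v;
    }

    // New test pinning down the extra behaviour so a refactoring cannot silently drop it.
    void testDuplicatesAreRemoved() {
        assert(sortedUnique({2, 1, 2}) == (std::vector<int>{1, 2}));
    }

    int main() {
        testDuplicatesAreRemoved();
    }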

Dan Vinton
A: 

But what if your code already accounts for the situation you want to test?

Does this break the TDD mantra of always writing failing tests?

Yes, because you've already broken the mantra of writing the test before the code. You could just delete the code and start over, or simply accept that the test passes from the start.

James Curran
So if my initial code is robust enough to pass more than one test, I should delete it and start over? That makes no sense...
Steven A. Lowe
@Steven - precisely my point....
James Curran
If that's your point, you are not making it very well, or so it seems to me...
Ilja Preuß
+2  A: 

Simple TDD rule: You write tests that might fail.

If software engineering has taught us anything, it's that you cannot predict test results. Not even failure. It's in fact quite common for me to see "new feature requests" that already happen to work in the existing software. That's common because many new features are straightforward extensions of existing business desires, and the underlying, orthogonal software design will still cover them.

For example, the new feature "List X must hold up to 10 items" instead of "up to 5 items" will require a new test case. The test will pass when the actual implementation of List X allows 2^32 items, but you don't know that for sure until you run the new test.
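
A sketch of what that might look like (ListX is hypothetical, backed here by std::vector, which in practice grows far beyond 10 items):

    #include <cassert>
    #include <cstddef>
    #include <vector>

    class ListX {
    public:
        bool add(int item) { items_.push_back(item); return true; }
        std::size_t size() const { return items_.size(); }
    private:
        std::vector<int> items_;
    };

    // New test for the new requirement "must hold up to 10 items".
    // It happens to pass immediately, but only running it tells you that for sure.
    void testHoldsTenItems() {
        ListX list;
        for (int i = 0; i < 10; ++i) assert(list.add(i));
        assert(list.size() == 10);
    }

    int main() {
        testHoldsTenItems();
    }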

MSalters
+2  A: 

I doubt anyone would recommend I waste the time to implement an unstable sorting algorithm just to test the test case, then reimplement the merge-sort. How often do you come across a similar situation and what do you do?

Let me be the one to recommend it, then. :)

All this stuff is a trade-off between the time you spend, on the one hand, and the risks you reduce or mitigate, plus the understanding you gain, on the other.

Continuing the hypothetical example...

If "stableness" is an important property/feature, and you don't "test the test" by making it fail, you save the time of doing that work, but incur risk that the test is wrong and will always be green.

If, on the other hand, you do "test the test" by breaking the feature and watching it fail, you spend that time but reduce the risk that the test is wrong.

And, the wildcard is, you might gain some important bit of knowledge. For example, while trying to code a 'bad' sort and get the test to fail, you might think more deeply about the comparison constraints on the type you're sorting, and discover that you were using "x==y" as the equivalence-class-predicate for sorting but in fact "!(x<y) && !(y<x)" is the better predicate for your system (e.g. you might uncover a bug or design flaw).
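
To make that concrete, here is a small sketch (the record type and values are made up) of why the two predicates differ when the sort key is only part of the value:

    #include <cassert>

    // Hypothetical record: the sort comparator only looks at 'key',
    // while operator== also compares 'payload'.
    struct Record {
        int key;
        int payload;
    };

    bool operator<(const Record& a, const Record& b)  { return a.key < b.key; }
    bool operator==(const Record& a, const Record& b) {
        return a.key == b.key && a.payload == b.payload;
    }

    int main() {
        Record x{1, 100}, y{1, 200};

        // Under the sort's ordering, x and y fall in the same equivalence class...
        assert(!(x < y) && !(y < x));

        // ...but they are not equal, so "x == y" is the wrong predicate for
        // reasoning about what a stable sort is allowed to reorder.
        assert(!(x == y));
    }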

So I say err on the side of "spend the extra time to make it fail, even if that means intentionally breaking the system just to get a red dot on the screen for a moment". While each of these little "diversions" incurs some time cost, every once in a while one will save you a huge bundle (e.g. oops, a bug in the test means that I was never testing the most important property of my system, or oops, our whole design for inequality predicates is messed up). It's like playing the lottery, except the odds are in your favor in the long run: every week you spend $5 on tickets and usually you lose, but once every three months you win a $1000 jackpot.

Brian
+1  A: 

The example you provided is IMO one of the proper times to write a test that passes on the first try. The purpose of proper tests is to document the expected behavior of the system. It's OK to write a test without changing the implementation, simply to further clarify what the expected behavior is.

P.S.

As I understand it, here's the reason you want the test to fail before making it pass:

The reason you "write a test that you know will fail, but test it before making it pass" is that every once in a while, the original assumption that the test will surely fail is wrong. In those cases the test has now saved you from writing unnecessary code.

Sean Reilly
A: 

I have run into this situation many times. Whilst I recommend and try to use TDD, sometimes it breaks the flow too much to stop and write tests.

I have a two-step solution:

  1. Once you have your working code and your non-failing test, deliberately insert a change into the code to cause the test to fail.
  2. Cut that change out of your original code and put it in a comment, either in the code method or in the test method, so that the next time someone wants to be sure the test still picks up failures, they know what to do. This also serves as proof that you have confirmed the test picks up failures. If it is cleanest, leave it in the code method. You might even want to take this as far as using conditional compilation to enable test breakers, as in the sketch below.
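
A rough sketch of the conditional-compilation variant (the macro name and the sort under test are hypothetical):

    #include <algorithm>
    #include <cassert>
    #include <vector>

    std::vector<int> mySort(std::vector<int> v) {
    #ifdef BREAK_SORT_TEST
        // Deliberate test breaker: compile with -DBREAK_SORT_TEST to confirm
        // that the test below still goes red when the behaviour is wrong.
        return v;
    #else
        std::sort(v.begin(), v.end());
        return v;
    #endif
    }

    void testSortsDescendingInput() {
        std::vector<int> expected{1, 2, 3, 4};
        assert(mySort({4, 3, 2, 1}) == expected);
    }

    int main() {
        testSortsDescendingInput();
    }
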
Andy Dent
A: 

As others have said, the mantra of TDD is "no new code without a failing unit test". I have never heard any TDD practitioner say "no new tests without missing code". New tests are always welcome, even if they "happen" to "accidentally" pass. There's no need to change your code so that the test breaks, then change it back so that it passes.

Ross