I am starting out with, and loving, TDD; however, I am wondering about the red/green light concept. I understand in theory the importance of ensuring you can fail a test before passing it. In practice, however, I am finding it a somewhat futile exercise.

I feel I can't properly write a failing or passing test without implementing the code I intend to test. For example, if I write a test to show that my DataProvider returns a DataRow, I need to write the DAL logic to get a meaningful failure: a failure that is more than a NullReferenceException or a null return from an empty method. Those feel meaningless, because to me a red light should show that I can produce a failing test from the actual logic I am testing.

In other words, if I just return null or false from a function I am testing in order to get my failure, what is the real value of the red light?

However, if I have already implemented the logic (which in a way goes against the test-first paradigm), I find I am simply testing mutually exclusive assertions (IsTrue instead of IsFalse, or IsNull instead of IsNotNull) just for the sake of getting a red light instead of a green one, and then switching them back to the opposite to get the pass.

I am not having a go at the concept; I am posing this question because it is something I have noticed, and I am wondering if I am doing something wrong.

EDIT

I accepted Charlie Martin's answer because it worked best for me. That is in no way to suggest the other answers lacked validity; all of them helped me understand a concept I apparently wasn't grokking properly.

+7  A: 

The value of the red light lies in its ability to spot false positives. It has happened to me that, no matter what my implementation code was, it always passed the tests. It is exactly in these kinds of situations that red light/green light testing helps.

It has also happened to me that some of my tests were not being run at all, and all I was seeing was 'Build Succeeded', because I wasn't using the red light. Had I been using the red light to make sure my tests were failing, I would have been suspicious the minute I saw the build succeed when I was expecting it to fail.
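A minimal sketch of that kind of false positive, in Python with a hypothetical save_row function (not anything from this answer): the assertion below is green no matter what the implementation does, and only insisting on seeing it go red first would reveal that.

import unittest

def save_row(row):
    """Hypothetical function under test -- still an empty stub."""
    pass

class TestSaveRow(unittest.TestCase):
    def testSaveRow(self):
        # False positive: the function object itself is truthy, so this
        # passes whether or not save_row is ever called or implemented.
        self.assertTrue(save_row)

if __name__ == '__main__':
    unittest.main()

This test can never be made red, which is exactly what exposes the mistake.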

Raminder
Exactly. Especially with larger test suites, it happens all too often that you simply forget to add a test case. The red light showing up with a no-op implementation ensures that your test is actually being run.
Ole
+1  A: 

There are a couple of motivating examples that I can think of why red-light is useful and has helped me tremendously.

  1. Writing a red test for the sake of sanity. It lets me be sure the test really works, i.e. that a feature I know isn't implemented yet REALLY REALLY REALLY isn't.

  2. When you find a bug in your code, you write a failing test that exposes it. Having a red test from the start, you can be pretty sure you have pinned the bug down, and you know exactly when it is fixed (see the sketch after this list).
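A hedged sketch of point 2 (the function and the bug here are made up for illustration): you write the test while the bug is still present, watch it go red, and only then fix the code.

import unittest

def word_count(text):
    # Hypothetical buggy implementation: splitting on a single space
    # counts the empty strings produced by repeated spaces.
    return len(text.split(" "))

class TestWordCountBug(unittest.TestCase):
    def testRepeatedSpaces(self):
        # Red against the buggy code above; green once the bug is fixed.
        self.assertEqual(word_count("red  green refactor"), 3)

if __name__ == '__main__':
    unittest.main()

Changing the implementation to len(text.split()) (which collapses runs of whitespace) turns the test green, and it stays in the suite as a regression guard.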

There is probably one case where the red light isn't useful, and that's when you're writing tests to cover functionality that already works; they're usually green from the start. I would warn you about writing green tests, though: it may happen that you have to redesign classes and whatnot substantially, which makes some tests obsolete. All that green-test writing work for nothing!

Spoike
You could always write a test that expects the extra functionality before adding it. Even if it only produces a compile failure, it's at least a red light.
tvanfosson
+1  A: 

I'm not sure I am getting your point, but here's how I see the matter.

Think less about what the function returns and more about what it does and what it assumes to be true.

If my true/false function is some language version of the following C function:

bool isIntPrime(int testInt);

then you want to ensure a test fails if you pass it a double (rather than have a 'helpful' implicit cast occur as you may encounter in some languages).
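As a rough Python rendering of that idea (is_int_prime here is a hypothetical stand-in, not code from this answer), the first test below goes red if the implementation quietly accepts or coerces a non-integer:

import unittest

def is_int_prime(n):
    # Hypothetical implementation that insists on an actual int.
    if not isinstance(n, int):
        raise TypeError("is_int_prime expects an int")
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

class TestIsIntPrime(unittest.TestCase):
    def testRejectsFloats(self):
        # Red if a 'helpful' coercion sneaks in and 7.0 is silently accepted.
        self.assertRaises(TypeError, is_int_prime, 7.0)

    def testKnownPrime(self):
        self.assertTrue(is_int_prime(7))

if __name__ == '__main__':
    unittest.main()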

If you really can't find a 'red light' case, then your 'green light' is largely without meaning. If you genuinely encounter such a case, testing that function or feature is probably not worth much and is somewhat a waste of time. Perhaps it is so simple and robust that it effectively can't fail? Then writing a bunch of "green light" tests is a waste of your time.

It's kinda like the white rabbit thought experiment. If I posit that all rabbits are brown, then counting brown rabbits does nothing to establish the veracity of my claim. However, the first white rabbit I see proves my claim false.

Does that trivial example help at all?

duncan
+1  A: 

I always start out with my code throwing a NotImplementedException, although some people would claim you should start by not implementing the method at all and letting the failed compile be your first failed test. There is some logic to this: if you can write a test without using the method (and it passes), then you don't need to write any code. I usually do that step in my head, though.

Having written the exception-throwing code, I proceed to write the first test for the feature I am working on and get the first Red light (presumably). Now I'm able to proceed with the regular rhythm of TDD -- Red-Green-Refactor. Don't forget that last step. Refactor when your tests pass -- not while writing code to correct a failing test.

This approach takes discipline, AND sometimes it seems like you are doing stupid stuff, since usually the simplest thing to do to pass the first test is to return some hard-coded data. Persevere, though; this bootstrapping phase is relatively short, and if you don't skip it you may find that you write simpler, more maintainable code than you would by having your solution (or at least its skeleton) magically spring into being whole on the first test. If you're not developing in small increments, you're not doing TDD.
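A small sketch of that bootstrapping rhythm, using a hypothetical next_invoice_number function: the first test was made green with hard-coded data, and only the second test forced a real implementation.

import unittest

def next_invoice_number(last_number):
    # First pass: "return 1001" (hard-coded) was enough to go green.
    # The second test below forced this generalisation.
    return last_number + 1

class TestNextInvoiceNumber(unittest.TestCase):
    def testFirstInvoice(self):
        self.assertEqual(next_invoice_number(1000), 1001)

    def testSecondInvoice(self):
        self.assertEqual(next_invoice_number(1001), 1002)

if __name__ == '__main__':
    unittest.main()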

Obligatory disclaimer: don't forget that TDD is about unit testing. There are other kinds of testing (integration, acceptance, load, ...), and the need for them doesn't magically disappear when you start doing TDD.

tvanfosson
+3  A: 

Think of it as a kind of specification. You start by asking yourself, "What should the code I eventually want be able to do?" So, say you want to write a function that adds natural numbers. How would you know if it worked? Well, you know that 2+2=4, so you can write a test (this is basically Python but leaves out a lot of details; see the unittest module docs):

def test2plus2(self):
    self.assertEqual(addNat(2, 2), 4)

So here you've defined a specification that says "for natural numbers a and b, compute a+b". Now you know what you need in order to write the function:

def addNat(a,b):
    return a+b

You run it and it passes the test. But then there are some other things you know; since it's for natural numbers only (for whatever reason), you need to add a guard against unnatural numbers:

def testUnnatural(self):
    self.assertRaises(AssertionError, addNat, -1, 2)

Now you've added a specification that says "and raise an AssertionError if the numbers are negative." That tells you the next piece of code:

def addNat(a,b):
    """Version two"""
    assert (a >= 0) and (b >= 0)
    return a+b

Run this now: the original test still passes (the assertion doesn't fire for valid input), and the new test passes because the assertion does fire for the negative argument; success again.

The point is that TDD is a way of defining very clear, detailed specifications. For something like "addNat" they're not needed, but for real code, especially in an agile world, you don't know the answer intuitively. TDD helps you sort that out and find the real requirements.
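For completeness, here is a sketch of how the snippets above might be assembled into the self./TestCase plumbing the answer deliberately leaves out, so the whole example can actually be run:

import unittest

def addNat(a, b):
    """Version two from above."""
    assert (a >= 0) and (b >= 0)
    return a + b

class TestAddNat(unittest.TestCase):
    def test2plus2(self):
        self.assertEqual(addNat(2, 2), 4)

    def testUnnatural(self):
        self.assertRaises(AssertionError, addNat, -1, 2)

if __name__ == '__main__':
    unittest.main()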

Charlie Martin