tags:
views: 388
answers: 13

When doing TDD, how do you tell "that's enough tests for this class / feature"?

I.e., how can you tell that you've covered all the edge cases?

+2  A: 

Well, when you can't think of any more cases where the code doesn't work as intended.

Part of TDD is to keep a list of things you want to implement, and problems with your current implementation... so when that list runs out, you are essentially done.

And remember, you can always go back and add tests when you discover bugs or new issues with the implementation.

Mike Stone
A: 

You could always use a test coverage tool like EMMA (http://emma.sourceforge.net/) or its Eclipse plugin EclEmma (http://www.eclemma.org/) or the like. Some developers believe that 100% test coverage is a worthy goal; others disagree.

Jim Kiley
+2  A: 

That's common sense; there's no perfect answer. TDD's goal is to remove fear, so if you feel confident you've tested it well enough, move on...

Just don't forget that if you find a bug later on, write a test first to reproduce the bug, then correct it, so you will prevent a future change from breaking it again!
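
For instance, a minimal sketch of that "reproduce the bug first" step, assuming JUnit 4 (the name-splitting code and the bug itself are invented purely for illustration):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical regression test: reproduce the reported bug with a failing
    // test first, then fix the production code until the test passes.
    public class NameSplitterRegressionTest {

        // Production code shown inline for brevity; the buggy version assumed
        // there was always a space and threw StringIndexOutOfBoundsException.
        static String lastName(String fullName) {
            int space = fullName.lastIndexOf(' ');
            return space < 0 ? fullName : fullName.substring(space + 1);
        }

        @Test
        public void lastNameOfSingleWordNameIsTheNameItself() {
            assertEquals("Cher", lastName("Cher")); // reproduces the reported bug
        }

        @Test
        public void lastNameOfTwoWordName() {
            assertEquals("Beck", lastName("Kent Beck"));
        }
    }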

Some people complain when they don't have X percent of coverage... some tests are useless, and 100% coverage does not mean you've tested everything that can make your code break, only that it won't break for the ways you used it!

pmlarocque
+11  A: 

With Test Driven Development, you'll write a test before you write the code it tests. Once you've written the code and the test passes, then it's time to write another test. If you follow TDD correctly, you've written enough tests once your code does all that is required.

As for edge cases, let's take an example such as validating a parameter in a method. Before you add the parameter to your code, you create tests which verify the code will handle each case correctly. Then you can add the parameter and the associated logic, and ensure the tests pass. If you think up more edge cases, more tests can be added.
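
As a rough sketch of what that could look like with JUnit 4 (the validation rule, names, and limits here are all made up for the example; the tests are written first and the validation logic is added until they pass):

    import org.junit.Test;
    import static org.junit.Assert.*;

    // End state after the tests drove out the validation logic for a new
    // hypothetical "age" parameter.
    public class AgeValidatorTest {

        static boolean isValidAge(Integer age) {
            return age != null && age >= 0 && age <= 150;
        }

        @Test
        public void rejectsNull() { assertFalse(isValidAge(null)); }

        @Test
        public void rejectsNegativeValues() { assertFalse(isValidAge(-1)); }

        @Test
        public void acceptsZeroAsBoundary() { assertTrue(isValidAge(0)); }

        @Test
        public void acceptsOrdinaryValue() { assertTrue(isValidAge(42)); }

        @Test
        public void rejectsImplausiblyLargeValue() { assertFalse(isValidAge(500)); }
    }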

By taking it one step at a time, you won't have to worry about edge cases when you've finished writing your code, because you'll have already written the tests for them all. Of course, there's always human error, and you may miss something... When that situation occurs, it's time to add another test and then fix the code.

Josh Sklare
A: 

Just try to come up with every way within reason that you could cause something to fail. Null values, values out of range, etc. Once you can't easily come up with anything, just continue on to something else.

If down the road you ever find a new bug or come up with a new way to break the code, add a test.

It is not about code coverage. That is a dangerous metric, because code is "covered" long before it is "tested well".

Brendan Enrick
+2  A: 

On some level, it's a gut feeling of

"Am I confident that the tests will catch all the problems I can think of now?"

On another level, you've already got a set of user or system requirements that must be met, so you could stop there.

While I do use code coverage to tell me if I didn't follow my TDD process and to find code that can be removed, I would not count code coverage as a useful way to know when to stop. Your code coverage could be 100%, but if you forgot to include a requirement, well, then you're not really done, are you?

Perhaps a misconception about TDD is that you have to know everything up front to test. This is misguided, because the tests that result from the TDD process are like a breadcrumb trail: they show you what has been tested in the past and can guide you to an extent, but they won't tell you what to do next.

I think TDD can be thought of as an evolutionary process. That is, you start with your initial design and its set of tests. As your code gets battered in production, you add more tests, and code that makes those tests pass. Each time you add a test here and a test there, you're still doing TDD, and it doesn't cost all that much. You didn't know those cases existed when you wrote your first set of tests, but you have gained the knowledge now, and can check for those problems at the touch of a button. This is the great power of TDD, and one reason why I advocate for it so much.

casademora
+1  A: 

Maybe I missed something somewhere in the Agile/XP world, but my understanding of the process was that the developer and the customer specify the tests as part of the feature. This allows the test cases to substitute for more formal requirements documentation, helps identify the use cases for the feature, etc. So you're done testing and coding when all of these tests pass... plus any more edge cases that you think of along the way.

Steven A. Lowe
+1  A: 

Alberto Savoia says that "if all your tests pass, chances are that your tests are not good enough". I think that is a good way to think about tests: ask whether you are covering the edge cases, passing some unexpected parameters, and so on. A good way to improve the quality of your tests is to work with a pair, especially a tester, and get help coming up with more test cases. Pairing with testers is good because they have a different point of view.

Of course, you could also use a tool to do mutation testing and get more confidence in your tests. I have used Jester, and it improved both my tests and the way that I wrote them. Consider using something like it.
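
To illustrate the idea (this is not Jester's actual API, just a hand-written example of the kind of mutant such a tool generates): a mutation tool might flip ">= 18" into "> 18" and rerun the suite. Without the boundary test below, that mutant would survive and the suite would look complete while still missing the edge case.

    import org.junit.Test;
    import static org.junit.Assert.*;

    // Hypothetical example of a test suite strong enough to "kill" a
    // typical boundary mutant (>= mutated to >).
    public class AdultCheckTest {

        static boolean isAdult(int age) {
            return age >= 18;
        }

        @Test
        public void seventeenIsNotAdult() { assertFalse(isAdult(17)); }

        @Test
        public void eighteenIsAdult() { assertTrue(isAdult(18)); } // kills the ">" mutant

        @Test
        public void thirtyIsAdult() { assertTrue(isAdult(30)); }
    }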

Kind Regards

marcospereira
+1  A: 

Theoretically you should cover all possible input combinations and test that the output is correct, but sometimes it's just not worth it.

Andrei Rinea
A: 

Many of the other comments have hit the nail on the head. Do you feel confident about the code you have written given your test coverage? As your code evolves do your tests still adequately cover it? Do your tests capture the intended behaviour and functionality for the component under test?

There must be a happy medium. As you add more and more test cases, your tests may become brittle, since what is considered an edge case continuously changes. Following many of the earlier suggestions, it can be very helpful to get everything you can think of down up front and then add new tests as the software grows. This kind of organic growth can help your test suite grow without all the effort up front.

I am not going to lie: I often get lazy when going back to write additional tests. I might miss that property that contains no code, or the default constructor that I do not care about. Sometimes not being completely anal about the process can save you time in areas that are less than critical (the 100% code coverage myth).

You have to remember that the end goal is to get a top notch product out the door and not kill yourself testing. If you have that gut feeling that you are missing something, then chances are you are, and you need to add more tests.

Good luck and happy coding.

smaclell
+2  A: 

A test is a way of precisely describing something you want. Adding a test expands the scope of what you want, or adds details of what you want.

If you can't think of anything more that you want, or any refinements to what you want, then move on to something else. You can always come back later.

jml
+1  A: 

Tests in TDD are about covering the specification; in fact, they can be a substitute for a specification. In TDD, tests are not about covering the code. They ensure the code covers the specification, because the code will fail a test if it doesn't cover the specification. Any extra code you have doesn't matter.

So you have enough tests when the tests look like they describe all the expectations that you or the stakeholders have.
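
As a small sketch of what "test as specification" can look like in JUnit 4 (the discount rule and names are invented for the example), the test name reads like the stated requirement:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical executable piece of a specification:
    // "Orders of ten or more items get 10% off."
    public class DiscountPolicyTest {

        static double priceAfterDiscount(double price, int itemCount) {
            return itemCount >= 10 ? price * 0.9 : price;
        }

        @Test
        public void ordersOfTenOrMoreItemsGetTenPercentOff() {
            assertEquals(90.0, priceAfterDiscount(100.0, 10), 0.001);
        }

        @Test
        public void smallerOrdersPayFullPrice() {
            assertEquals(100.0, priceAfterDiscount(100.0, 9), 0.001);
        }
    }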

Cade Roux
+3  A: 

Kent Beck's advice is to write tests until fear turns into boredom. That is, until you're no longer afraid that anything will break, assuming you start with an appropriate level of fear.

John D. Cook