tags:
views: 227
answers: 7

I don't know much about Test-Driven Development (TDD), but I always hear that I need to start development with some test cases. Then I need to make those tests pass with the simplest possible solution. And then create more tests to make my tests fail again...

But the question is: when do I stop creating new tests? Is it when I know that my application meets the requirements?

+2  A: 

Code coverage tools can provide useful information about how well tested your code is. Such tools will identify code paths that have not been exercised by your tests.
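To illustrate (a minimal Python sketch; the function and tests are hypothetical), here is the kind of gap a coverage tool such as coverage.py would flag:

```python
# A function with two branches, and a test suite that exercises only one
# of them. Running this under a coverage tool (e.g. "coverage run" then
# "coverage report") would show the "return -n" line as never executed.

def absolute(n):
    if n < 0:
        return -n      # this path is missed by the first test
    return n

def test_absolute_positive():
    assert absolute(5) == 5   # only the n >= 0 path is exercised

test_absolute_positive()

# The coverage report would prompt an additional test for the missed path:
def test_absolute_negative():
    assert absolute(-3) == 3

test_absolute_negative()
```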

Dan Dyer
If you are doing TDD, there should be very few such paths, other than for code yet to be written.
Kathy Van Stone
+2  A: 

In TDD, you stop writing tests when you stop writing code (or just slightly before the last code is written), unless, as mentioned, your code coverage is too low.
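To make that loop concrete, here is a minimal sketch in Python (the shopping-cart example is hypothetical) of the red-green cycle the question describes: each failing test drives just enough code, and when there is no more behaviour to add, there are no more tests to write.

```python
# Step 1 (red): write a failing test for the next piece of behaviour.
def test_empty_cart_total_is_zero():
    assert Cart().total() == 0

# Step 2 (green): write just enough code to make it pass.
class Cart:
    def __init__(self):
        self.prices = []

    def add(self, price):
        self.prices.append(price)

    def total(self):
        return sum(self.prices)

# Next iteration: another failing test, then the code above already
# (or is extended until it does) makes it pass.
def test_total_sums_item_prices():
    cart = Cart()
    cart.add(3)
    cart.add(4)
    assert cart.total() == 7

test_empty_cart_total_is_zero()
test_total_sums_item_prices()
# No more required behaviour -> no more TDD tests to write.
```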

stevedbrown
Yes, when you stop writing code you can stop testing it. Until you find that edge case you didn't think about, or something else changes. When your understanding of the problem changes, then you change the tests or add new ones (then change the code so the tests pass).
Hamish Smith
Right, new tests invariably lead to code changes.
stevedbrown
A: 

There are certain areas you may find difficult to test, such as GUIs and data access, but apart from that you write tests until your objectives are met.

John Nolan
+8  A: 

Shamelessly copying Kent Beck's answer to this question.

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don't typically make a kind of mistake (like setting the wrong variables in a constructor), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.

Different people will have different testing strategies based on this philosophy, but that seems reasonable to me given the immature state of understanding of how tests can best fit into the inner loop of coding. Ten or twenty years from now we'll likely have a more universal theory of which tests to write, which tests not to write, and how to tell the difference. In the meantime, experimentation seems in order.

Matthew Vines
+1 to Kent then :)
John Nolan
Yeah, I certainly don't need credit for this, but this really stuck with me, and I think it answers this question just as well.
Matthew Vines
Note that the question was not about TDD, which is different.
John Saunders
That's true, but I think the same principles apply. The overall concept is to not write tests for the sake of tests, but to ensure proper functionality. Even when doing TDD. Write a test for a purpose, code the purpose, validate, and refactor. Don't write more tests to cover the same purpose unless you are worried you have missed an edge case. Good is good, but what good means will differ by developer and by team. Just give it some thought.
Matthew Vines
@John Saunders the question (which was mine) was written with TDD in mind, although its content was agnostic of methodology.
John Nolan
+1  A: 

You stop writing tests when you have no more functionality to add to your code. There may be some additional edge cases you want to make sure are covered, but beyond that, when you don't have anything more to have your code do, you don't have any more TDD tests to write (Acceptance and QA tests are a different story).
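For the edge-case point, a small hypothetical Python illustration: the main behaviour is covered, but one boundary condition is worth a final test before stopping.

```python
# The "typical" behaviour is already tested; the empty-input edge case
# is the kind of extra test worth adding before declaring the code done.
def average(values):
    if not values:             # guard surfaced by the edge-case test;
        return 0.0             # without it, average([]) would raise
    return sum(values) / len(values)

def test_average_typical():
    assert average([2, 4, 6]) == 4.0

def test_average_empty_list():
    assert average([]) == 0.0

test_average_typical()
test_average_empty_list()
```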

Yishai
A: 

In an ideal world where I would follow eXtreme Programming practices (not just TDD), my customer is supposed to provide me with automated functional tests. When such a test goes green, I stop writing tests and go to my customer to ask for more functional tests that do not pass (because tests are the specification, and if my customer does not provide me with failing tests, I won't know what to do).

I could explain it another way, aimed at a more practical world. At XP France we organize TDD dojos on a regular basis (once a week); you could call them TDD training sessions. There we practice TDD on toy problems. The idea is to propose a test that fails, then write code to make it pass. Never propose a test that passes without coding.

Whoever proposes a test that goes green without any new code has to buy beers for the others. So that's one way to know it's time to stop testing: when you are no longer able to write a test that fails, you are finished. (Anyway, coding after drinking is bad practice.)

kriss