views: 423
answers: 13

I recently spent about 70% of the time on a feature writing integration tests. At one point, I was thinking “Damn, all this hard work testing it, I know I don’t have bugs here, why do I work so hard on this? Let’s just skimp on the tests and finish it already…”

Five minutes later a test fails. Detailed inspection shows it’s an important, unknown bug in a 3rd party library we’re using.

So … where do you draw the line on what to test and what to take on faith? Do you test everything, or only the code where you expect most of the bugs?

A: 

I test Everything. I hate it, but it's an important part of my work.

Jonathan
I can't figure out if you meant to write 'I used to test' in the past tense, or 'I test Everything' in the present tense.
ripper234
Sorry for my English. I wanted to say "I test Everything", but as Lars A. Brekken also said here, it's very important to prioritize.
Jonathan
+11  A: 

In my opinion, it's important to be pragmatic when it comes to testing. Prioritize your testing effort on the things that are most likely to fail and on the things that absolutely must not fail (i.e. take both probability and consequence into consideration).

Think, instead of blindly following one metric such as code coverage.

Stop when you are comfortable with the test suite and your code. Go back and add more tests when (if?) things start failing.

Lars A. Brekken
+1  A: 

If you or your team has been tracking metrics, you can see how many bugs are found per test as the software life cycle progresses. If you've defined an acceptable threshold where the time spent testing no longer justifies the number of bugs found, then THAT is the point at which you should stop.

You will probably never find 100% of your bugs.

AlbertoPL
You should change "probably never" to just "never".
Lieven
You can NEVER say that a piece of software is defect free. Absence of evidence is not evidence of absence.
StuperUser
public static void main(String[] args) { System.out.print("Hello world!"); }
It's for dumb things like these simple programs that I keep the "probably" in there. "Never" is still included.
AlbertoPL
A: 

I spend a lot of time on unit tests, but very little on integration tests. Unit tests allow me to build out a feature in a structured way. And now you have some nice documentation and regression tests that can be run on every build.

Integration tests are a different matter. They are difficult to maintain and by definition integrate a lot of different pieces of functionality, often with infrastructure that is difficult to work with.

Jim
A: 

It's never enough.

usoban
+3  A: 

Good question!

Firstly - it sounds like your extensive integration testing paid off :)

From my personal experience:

  • If it's a "greenfield" new project, I like to enforce strict unit testing and have a thorough (as thorough as possible) integration test plan designed.
  • If it's an existing piece of software with poor test coverage, then I prefer to design a set of integration tests that test specific/known functionality. I then introduce tests (unit/integration) as I progress further with the code base.

How much is enough? Tough question - I don't think there can ever be enough!

bunn_online
Nathan Koop
Agreed - it's always a fine balance. Quantifying the value of extensive test coverage to non-developer stakeholders is a challenge. Anyone got good ideas on how to do that?
bunn_online
+3  A: 

"Too much of everything is just enough."

I don't follow strict TDD practices. I try to write enough unit tests to cover all code paths and exercise any edge cases I think are important. Basically I try to anticipate what might go wrong. I also try to match the amount of test code I write to how brittle or important I think the code under test is.

I am strict in one area: if a bug is found, I first write a test that exercises the bug and fails, make the code changes, and verify that the test passes.
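A rough sketch of that workflow, with hypothetical names and JUnit 4 (not my actual code): the test is written against the buggy code first so it fails, then the fix makes it pass, and the test stays in the suite as a regression guard.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class DiscountRegressionTest {

        // Minimal, hypothetical stand-in for the class under test. The buggy
        // version subtracted the discount even for an empty order, producing a
        // negative total.
        static double total(int quantity, double unitPrice, double discount) {
            if (quantity == 0) {
                return 0.0; // the fix
            }
            return quantity * unitPrice - discount;
        }

        // Written first, against the buggy code, so it failed; after the fix it
        // passes and stays in the suite.
        @Test
        public void emptyOrderTotalsToZero() {
            assertEquals(0.0, total(0, 25.0, 5.0), 0.0001);
        }
    }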

Jamie Ide
+1 on writing a test before fixing a bug.
ripper234
A: 

As with everything in life, it is limited by time and resources and relative to its importance. Ideally you would test everything that you reasonably think could break. Of course your estimate can be wrong, but whether it's worth over-testing to confirm your assumptions depends on how significant a bug would be versus the need to move on to the next feature/release/project.

Note: my answer primarily addresses integration testing. TDD is very different. It was covered on SO before, and there you stop testing when you have no more functionality to add. TDD is about design, not bug discovery.

Yishai
+3  A: 

When you're no longer afraid to make medium to major changes in your code, then chances are you've got enough tests.

Joachim Sauer
A: 

I worked in QA for 1.5 years before becoming a developer.

You can never test everything (when I was trained, I was told that exhaustively testing all the permutations of a single text box would take longer than the age of the known universe).

As a developer it's not your responsibility to know or state the priorities of what is important to test and what not to test. Testing and the quality of the final product are a responsibility, but only the client can meaningfully state the priorities of features, unless they have explicitly given that responsibility to you. If there isn't a QA team and you don't know, ask the project manager to find out and prioritise.

Testing is a risk-reduction exercise, and the client/user will know what is important and what isn't. Using test-first development from Extreme Programming will be helpful, so you have a good test base and can regression test after a change.

It's important to note that, through a kind of natural selection, code can become "immune" to tests. Code Complete says that when fixing a defect you should write a test case for it and look for similar defects; it's also a good idea to write test cases for those similar defects.

StuperUser
Saying it's not your responsibility just because you're not in QA is escaping responsibility. Where I work, we are all Feature Owners - we are responsible for driving our features from the spec to design to implementation through testing and finally deployment. You can use other people's help, but it is your responsibility!
ripper234
I don't think I've been clear enough about what I meant. It is certainly a developer's responsibility to ensure quality in their work and the product being built; what is not their responsibility is making the call on feature priorities when they have not been given any.
StuperUser
A: 

I prefer to unit test as much as possible. One of the greatest side effects (other than increasing the quality of your code and helping keep some bugs away) is that, in my opinion, high unit test expectations require you to change the way you write code, for the better. At least, that's how it worked out for me.

My classes are more cohesive, easier to read, and much more flexible because they're designed to be functional and testable.

That said, I default to a unit test coverage requirement of 90% (line and branch) using JUnit and Cobertura (for Java). When I feel that this requirement cannot be met due to the nature of a specific class (or bugs in Cobertura), then I make exceptions.

Unit tests start with coverage, and really work for you when you've used them to test boundary conditions realistically. For advice on how to implement that goal, the other answers all have it right.
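One way to read "boundary conditions" in practice, using a hypothetical helper and JUnit 4 (not my actual setup): a single happy-path call would already satisfy a coverage metric, but the edge cases are where the tests pay off.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class TruncateBoundaryTest {

        // Hypothetical helper under test: cut a string down to at most maxLength characters.
        static String truncate(String text, int maxLength) {
            return text.length() <= maxLength ? text : text.substring(0, maxLength);
        }

        @Test
        public void emptyStringStaysEmpty() {
            assertEquals("", truncate("", 10));
        }

        @Test
        public void exactlyAtLimitIsUnchanged() {
            assertEquals("abcde", truncate("abcde", 5));
        }

        @Test
        public void onePastLimitIsCut() {
            assertEquals("abcde", truncate("abcdef", 5));
        }
    }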

Steve Reed
A: 

This article gives some very interesting insights on the effectiveness of user testing with different numbers of users. It suggests that you can find about two thirds of your errors with only three users testing the application, and as much as 85% of your errors with just five users.
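Assuming the article in question is Nielsen and Landauer's well-known five-user piece (an assumption; the link isn't preserved here), those figures fall out of the simple discovery model found(n) = 1 - (1 - L)^n with a per-user problem-discovery rate L of roughly 31%. A quick check:

    // Quick check of the discovery-rate model behind the 3-user / 5-user figures.
    // The 0.31 per-user rate is the commonly quoted estimate, not a value from this thread.
    public class UserTestingCurve {
        public static void main(String[] args) {
            double perUserRate = 0.31;
            for (int users : new int[] {1, 3, 5}) {
                double found = 1 - Math.pow(1 - perUserRate, users);
                System.out.printf("%d user(s): ~%.0f%% of problems found%n", users, found * 100);
            }
        }
    }

With those numbers, three users find about 67% of problems and five users find about 84%, matching the figures above.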

Unit testing is harder to put a discrete value on. One suggestion to keep in mind is that unit testing can help to organize your thoughts on how to develop the code you're testing. Once you've written the requirements for a piece of code and have a way to check it reliably, you can write it more quickly and reliably.

Dean Putney
+3  A: 

Gerald Weinberg's classic book "The Psychology of Computer Programming" has lots of good stories about testing. One I especially like is in Chapter 4, "Programming as a Social Activity": "Bill" asks a co-worker to review his code and they find seventeen bugs in only thirteen statements. Code reviews provide additional eyes to help find bugs; the more eyes you use, the better your chance of finding ever-so-subtle bugs. As Linus said, "Given enough eyeballs, all bugs are shallow." Your tests are basically robotic eyes that will look over your code as many times as you want, at any hour of day or night, and let you know whether everything is still kosher.

How many tests are enough does depend on whether you are developing from scratch or maintaining an existing system.

When starting from scratch, you don't want to spend all your time writing tests and end up failing to deliver because the 10% of the features you were able to code are exhaustively tested. There will be some amount of prioritization to do. One example is private methods. Since private methods must be used by code that is visible in some form (public/package/protected), private methods can be considered to be covered by the tests for the more-visible methods. This is where you need to include some white-box tests if there are important or obscure behaviors or edge cases in the private code.

Tests should help you make sure you 1) understand the requirements, 2) adhere to good design practices by coding for testability, and 3) know when previously existing code stops working. If you can't describe a test for some feature, I would be willing to bet that you don't understand the feature well enough to code it cleanly. Writing unit test code forces you to do things like pass important collaborators such as database connections or instance factories in as arguments, instead of giving in to the temptation of letting the class do way too much by itself and turning into a 'God' object. Letting your code be your canary means that you are free to write more code. When a previously passing test fails, it means one of two things: either the code no longer does what was expected, or the requirements for the feature have changed and the test simply needs to be updated to fit the new requirements.
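A minimal sketch of that "pass the important things in as arguments" point, with hypothetical names and JUnit 4: because the collaborator arrives through the constructor, the test can hand the class a stub instead of a real database connection.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class ReportServiceTest {

        // Hypothetical collaborator the class might otherwise construct for itself.
        interface OrderStore {
            int countSince(int year);
        }

        // The class under test receives its dependency instead of reaching out to a
        // database or a global singleton, which is what keeps it testable.
        static class ReportService {
            private final OrderStore store;

            ReportService(OrderStore store) {
                this.store = store;
            }

            String summary(int year) {
                return store.countSince(year) + " orders";
            }
        }

        @Test
        public void summaryUsesWhateverStoreItIsGiven() {
            OrderStore stub = new OrderStore() {
                public int countSince(int year) {
                    return 42; // canned answer, no database needed
                }
            };
            assertEquals("42 orders", new ReportService(stub).summary(2009));
        }
    }

The design choice, not the specific names, is the point: once the dependency is an argument, swapping in a test double takes a few lines instead of a test database.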

When working with existing code, you should be able to show that all the known scenarios are covered, so that when the next change request or bug fix comes along, you will be free to dig into whatever module you see fit without the nagging worry "what if I break something?", which leads to spending more time testing even small fixes than it took to actually change the code.

So, we can't give you a hard and fast number of tests, but you should shoot for a level of coverage that increases your confidence in your ability to keep making changes or adding features; beyond that, you've probably reached the point of diminishing returns.

Kelly French