views:

25

answers:

1

This is a rather broad question; I'm looking for any document that contains an estimate or otherwise tries to formulate an answer. Published research would be awesome. When I say false negative, I mean a test result that is logged as a failure but wasn't actually caused by a code defect in the application under test.

For context: we've relied on integration-level test automation for a while, and we've always had a certain number of false negatives in our results. Management seems to think that the number of false negatives can and should be zero. I'm trying to determine whether this is a realistic expectation.

Any industry studies, or any other information on this subject, would be greatly appreciated!

+1  A: 

It's impossible to say that, in general, for all software, the rate of false negatives is X%. It varies based on the test being run. Tests can be applied to network transactions, internal application logic, database structure, hardware verification, and so on, and all of these have wildly different testing characteristics.

If you specify more information about the particular test that's occasionally reporting incorrect results, then we might be able to help out a little. Otherwise, you're on your own.

Reinderien
I agree with Reinderien, and would add the following: there are also different types of false negatives, some of which you control and some of which you do not. There are false negatives due to synchronization issues, data issues, and UI changes. It all really depends on which specific types of issues you are talking about. You likely cannot get rid of them entirely, but you ought to be able to mitigate them.
Tom E
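
As a minimal sketch of the kind of mitigation Tom E mentions for synchronization issues (this example is not from the original posts, and the names `order_service`, `order_id`, and `wait_for` are hypothetical): instead of asserting a condition immediately and failing when the system is merely slow, an integration test can poll for the condition with a deadline, so a timing hiccup alone no longer produces a false negative.

    import time

    def wait_for(condition, timeout=10.0, interval=0.25):
        """Poll `condition` until it returns True or `timeout` seconds elapse.

        Polling with a deadline is a common way to reduce false negatives
        caused by timing/synchronization issues in integration tests:
        the test fails only if the condition never becomes true within
        the allowed window, not merely because it was checked too early.
        """
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if condition():
                return True
            time.sleep(interval)
        return False

    # Hypothetical usage in a test; order_service and order_id are
    # illustrative names only.
    def test_order_eventually_marked_shipped(order_service, order_id):
        assert wait_for(lambda: order_service.status(order_id) == "SHIPPED"), \
            "order was never marked SHIPPED within the timeout"

This does not eliminate false negatives (a too-short timeout or a genuinely flaky dependency will still fail the test), which is consistent with the point above: they can usually be reduced, not driven to zero.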