views: 459
answers: 4

Hi all,

I searched SO and found very little about negative testing, which is also an important thing for developers to keep in mind while working. How about collaboratively building a list of the top 10 negative test cases developers should keep in mind?

Thanks!

The definition of Negative Testing:

In software testing, a negative test is designed to determine the response of the system to input outside of what is defined. Its purpose is to verify that the system does not crash on unexpected input.
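To make the definition concrete, here is a minimal sketch of a positive test alongside negative tests, written in Python with pytest; the parse_age function and its contract are hypothetical, purely for illustration:

    import pytest

    def parse_age(text):
        # Hypothetical code under test: parse a non-negative integer age.
        value = int(text)          # raises ValueError on non-numeric input
        if value < 0:
            raise ValueError("age cannot be negative")
        return value

    # Positive test: defined behavior with valid input.
    def test_parse_age_valid():
        assert parse_age("42") == 42

    # Negative tests: unexpected input should fail cleanly, not crash the system.
    @pytest.mark.parametrize("bad", ["", "abc", "-1", "4.5"])
    def test_parse_age_rejects_invalid_input(bad):
        with pytest.raises(ValueError):
            parse_age(bad)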

+4  A: 

Negative Testing

passing invalid data to the system under test

see http://www.pragmaticsw.com/Newsletters/newsletter_2007_09_SP.htm for one example of a top-ten list

While testing boundaries et al. is good, in TDD it would be better to explicitly test for expected exception conditions rather than randomly testing negatives that may or may not be relevant.

So a top-ten list would either be very generic, make a bunch of assumptions, or be too specific to be generally useful ;-)
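To illustrate that point, here is a sketch of what "explicitly test for expected exception conditions" might look like in Python with pytest; the withdraw function and its error contract are made up for the example:

    import pytest

    class InsufficientFunds(Exception):
        pass

    def withdraw(balance, amount):
        # Hypothetical code under test with two *defined* failure modes.
        if amount <= 0:
            raise ValueError("amount must be positive")
        if amount > balance:
            raise InsufficientFunds("cannot withdraw %s from %s" % (amount, balance))
        return balance - amount

    # Each test pins down one specified exception condition, rather than
    # throwing random invalid input at the function and hoping something breaks.
    def test_withdraw_rejects_overdraft():
        with pytest.raises(InsufficientFunds):
            withdraw(100, 150)

    def test_withdraw_rejects_nonpositive_amount():
        with pytest.raises(ValueError):
            withdraw(100, 0)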

Steven A. Lowe
OK, well I'm glad someone knew what it was. :)
Robert Harvey
@Robert Harvey: Google knows!
Steven A. Lowe
Sure, but the matches in Google that I saw were, shall we say, less than authoritative. I wasn't going to guess, and I'd never heard the term in common use before, although Scott Hanselman did use the term once in his "Art of Unit Testing" podcast.
Robert Harvey
A: 

Yes, he's talking about writing tests such that you ensure the code not only does what it's meant to do, but doesn't do more.

So imagine a test that checks whether a file is deleted; an implementation could delete the entire folder and the test would still pass.
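As a sketch of the stricter check (Python; delete_file stands in for whatever code is actually under test):

    import os
    import tempfile

    def delete_file(path):
        # Stand-in for the code under test; a buggy version might
        # remove the whole folder instead of just the file.
        os.remove(path)

    def test_delete_removes_only_the_target():
        folder = tempfile.mkdtemp()
        target = os.path.join(folder, "target.txt")
        sibling = os.path.join(folder, "sibling.txt")
        for path in (target, sibling):
            with open(path, "w") as f:
                f.write("data")

        delete_file(target)

        # Not just "the target is gone" -- the rest of the folder must survive.
        assert not os.path.exists(target)
        assert os.path.exists(sibling)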

This form of testing is arguably interesting, but potentially of dubious value.

For example, if you want to make sure an 'update' statement is affecting your given row, you should also confirm that it affects only that one row.
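For instance, with Python's built-in sqlite3 (the table and data here are invented for the sketch), the test can assert the affected-row count as well as the new value:

    import sqlite3

    def test_update_touches_exactly_one_row():
        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        con.executemany("INSERT INTO users VALUES (?, ?)",
                        [(1, "alice"), (2, "bob")])

        cur = con.execute("UPDATE users SET name = 'carol' WHERE id = 1")

        # Check the side effect AND its scope: exactly one row was changed,
        # and the other row is untouched.
        assert cur.rowcount == 1
        assert con.execute(
            "SELECT name FROM users WHERE id = 2").fetchone()[0] == "bob"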

I guess what my boring mumbling is suggesting is that this should probably be covered in your normal tests. Maybe.

Interesting to think about anyway.

Noon Silk
+1  A: 

In Scott Hanselman's podcast, "The Art of Unit Testing with Roy Osherove," Scott says that he tends to write positive tests when he is building the application using TDD, and negative tests after the code is written to improve code coverage.

Roy Osherove says that negative tests in the beginning do not add value; you can't go to your boss and say, "Hey look, here's all of the things that the code will NOT do!"

As to a list of possible negative tests, I think that list is unbounded, i.e. of infinite size, and I don't think any one negative condition is better or worse than any other.

Robert Harvey
+3  A: 

Tools like Pex may be useful here; it is designed to find values/scenarios that crash the code (by exercising every code branch and likely error cases such as div-by-zero, overflow, etc.), based on static analysis of what the code does. It has successfully found edge cases in code like the .NET "resx" reader.
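Pex itself is a .NET tool, so this is only a rough analogue: in Python, a property-based tester such as hypothesis plays a similar role, generating inputs until it finds a crashing case (the scale function below is invented to hide a div-by-zero):

    from hypothesis import given, strategies as st

    def scale(total, count):
        # Hypothetical code under test; crashes when count == 0.
        return total // count

    # No hand-picked negatives: the tool searches the input space itself
    # and will quickly report that count=0 raises ZeroDivisionError.
    @given(st.integers(), st.integers())
    def test_scale_handles_all_integer_inputs(total, count):
        scale(total, count)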

Marc Gravell
Ah, yes, if you're looking for tools like this, search for 'fuzzers'.
Noon Silk
Just tried Pex, and it's awesome.
Umair Ahmed