I've only done minor unit testing at various points in my career. Whenever I start diving into it again, it always troubles me that I can't prove my tests are correct. How can I tell that there isn't a bug in my unit test? Usually I end up running the app, proving it works, and then using the unit test as a sort of regression test. What is the recommended approach, and/or what approach do you take to this problem?

Edit: I also realize that you could write small, granular unit tests that would be easy to understand. However, if you assume that small, granular code is flawless and bulletproof, you could just write small, granular programs and not need unit testing.

Edit 2: To the arguments "unit testing is for making sure your changes don't break anything" and "this will only happen if the test has the exact same flaw as the code": what if the test overfits? A bad test can pass both good and bad code. My main question is: what good is unit testing if your tests can be flawed? You can't really improve your confidence in your code, can't really prove your refactoring worked, and can't really prove that you met the specification.

+2  A: 

I guess writing the test first (before writing the code) is a pretty good way of being sure your test is valid.
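
For example, a minimal sketch in Python (the slugify() function is hypothetical): the test is written first, fails on its first run because the code doesn't exist yet, and only then is the code written to make it pass.

    import unittest

    # Step 1: the test exists before the code, so the first run fails
    # (with a NameError) - which proves the test is able to fail at all.
    class TestSlugify(unittest.TestCase):
        def test_replaces_spaces_with_hyphens(self):
            self.assertEqual(slugify("Unit Testing"), "unit-testing")

    # Step 2: only then write the code that makes the test pass.
    def slugify(text):
        return text.lower().replace(" ", "-")

    if __name__ == "__main__":
        unittest.main()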

Or you could write tests for your unit tests... :P

teedyay
+1  A: 

You don't tell. Generally, the tests will be simpler than the code they're testing, so the idea is simply that they'll be less likely to have bugs than the real code will.

mquander
Once again, why even write unit tests then? Why not just write simple nuggets of code and claim, "the methods are simple, so they have a small chance of having bugs"?
Jacob Adams
That's what many people try to do. However, unless you're a very talented programmer, it's often very difficult to write code that simple and transparent.
mquander
A: 

As above, the best way is to write the test before the actual code. Also, where applicable, find real-life reference values for the code you're testing (a mathematical formula or similar) and compare the unit test's expected output against them.
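
For instance, a sketch in Python (the quadratic-root function is hypothetical) comparing against values worked out by hand, independently of the code:

    import math
    import unittest

    def quadratic_roots(a, b, c):
        # Hypothetical unit under test: real roots of ax^2 + bx + c = 0.
        d = math.sqrt(b * b - 4 * a * c)
        return ((-b + d) / (2 * a), (-b - d) / (2 * a))

    class TestQuadraticRoots(unittest.TestCase):
        def test_known_textbook_example(self):
            # x^2 - 5x + 6 = 0 factors as (x - 2)(x - 3), so the roots
            # are 3 and 2 - computed by hand, not by the code under test.
            self.assertEqual(quadratic_roots(1, -5, 6), (3.0, 2.0))

    if __name__ == "__main__":
        unittest.main()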

Dave
+5  A: 

Well, Dijkstra famously said:

"Testing shows the presence, not the absence of bugs"

For example, how would you write a unit test for the function add(int, int)?

IOW, it's a tough one.

Assaf Lavie
Wrong example, not so tough. There are INT_MIN, INT_MAX, -1, 0, and 1. Test all permutations. ;-)
DevSolar
@DevSolar: with your inputs how do you make sure that add(42,13) gives the correct result? Assaf did not provide source code of his function and there may be a specific case with the value 42.
mouviciel
It's called domain testing. To quote from http://www.testingeducation.org/BBST/Domain.html : The basic notion is that you take the huge space of possible tests of an individual variable and subdivide it into subsets that are (in some way) equivalent. Then you test a representative from each subset.
DevSolar
(continued) So you take the largest possible negative, the smallest possible negative, zero... so on. Any errors in add() are most likely to show up at these "domain borders". A specific case with value 42 would basically require some "magic numbers" in the code, and we don't use them, do we? ;-)
DevSolar
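
To make that concrete, a sketch in Python of such a boundary test (add() is hypothetical, and the expected sums are computed by hand; note that Python ints don't overflow, whereas these borders are exactly where a C add(int, int) would break):

    import unittest

    INT_MIN, INT_MAX = -2**31, 2**31 - 1   # 32-bit domain borders

    def add(a, b):
        # Hypothetical unit under test.
        return a + b

    class TestAddDomains(unittest.TestCase):
        def test_domain_borders(self):
            # Representatives of each subdomain, with hand-computed
            # expected values - not derived from the code under test.
            cases = {
                (INT_MAX, 1): 2147483648,    # would overflow a C int
                (INT_MIN, -1): -2147483649,  # would underflow a C int
                (-1, 1): 0,
                (0, 0): 0,
            }
            for (a, b), expected in cases.items():
                self.assertEqual(add(a, b), expected)

    if __name__ == "__main__":
        unittest.main()
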
Yes, I agree that the assumption made by equivalence testing ("we don't use magic numbers") is correct almost always. This is why it's very effective and useful. But think of a div(float, float) running on a CPU with a flawed FDIV. If you don't know that the CPU is buggy, your domains will not be enough.
mouviciel
It's not a perfect world. ;-)
DevSolar
That's why we don't unit test hardware; we prove it correct.
Stefan Kendall
+2  A: 

For this to be a problem your code would have to be buggy in a way that coincidentally causes your tests to pass. This happened to me recently, where I was checking that a given condition (a) caused a method to fail. The test passed (i.e. the method failed), but it passed because another condition (b) caused a failure. Write your tests carefully, and make sure that unit tests test ONE thing.
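
For example, a sketch in Python (withdraw() and both exception types are hypothetical): pin down the specific failure instead of accepting any failure.

    import unittest

    class InsufficientFunds(Exception): pass
    class AccountFrozen(Exception): pass

    def withdraw(balance, amount, frozen=False):
        # Hypothetical unit under test.
        if frozen:
            raise AccountFrozen()
        if amount > balance:
            raise InsufficientFunds()
        return balance - amount

    class TestWithdraw(unittest.TestCase):
        def test_overdraft_fails_for_the_right_reason(self):
            # A bare assertRaises(Exception) would also pass if the
            # failure came from condition (b), a frozen account.
            # Pinning the exact exception ensures the test exercises
            # condition (a) and nothing else.
            with self.assertRaises(InsufficientFunds):
                withdraw(balance=10, amount=50)

    if __name__ == "__main__":
        unittest.main()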

Generally though, tests cannot be written to prove code is bug free. They're a step in the right direction.

Dominic Rodger
Note my comment about just arguing "write simple tests". I'm not saying tests can prove the code is bug-free. However, how can they even increase my confidence in my code if the tests themselves could be faulty?
Jacob Adams
They cannot give you complete confidence that your code is bug-free. They can increase your confidence that your code is bug-free, though, as a matter of probability. For a bug to slip through, your tests have to exhibit the same buggy behaviour as your code.
Dominic Rodger
+1  A: 

This is something that bugs everyone who uses unit tests. If I had to give you a short answer, I'd tell you to always trust your unit tests. But I would say that this trust should be backed up by your previous experience:

  • Did you have any defects reported from manual testing that a unit test should have caught but didn't, because there was a bug in the test?
  • Did you have false negatives in the past?
  • Are your unit tests simple enough?
  • Do you write them before new code or at least in parallel?
Yorgos Pagles
+12  A: 

The unit test should express the "contract" of whatever you are testing. It's more or less the specification of the unit put into code. As such, given the specs, it should be more or less obvious whether the unit tests are "correct".

But I would not worry too much about the "correctness" of the unit tests. They are part of the software, and as such, they could well be incorrect as well. The point of unit tests - from my POV - is that they ensure the "contract" of your software is not broken by accident. That is what makes unit tests so valuable: You can dig around in the software, refactor some parts, change the algorithms in others, and your unit tests will tell you if you broke anything. Even incorrect unit tests will tell you that.

If there is a bug in your unit tests, you will find out - because the unit test fails while the tested code turns out to be correct. Well then, fix the unit test. No big deal.
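
As a sketch of what "contract as test" can look like in Python (the Stack class is hypothetical), each test states one clause of the specification:

    import unittest

    class Stack:
        # Hypothetical unit whose contract the tests spell out.
        def __init__(self):
            self._items = []
        def push(self, item):
            self._items.append(item)
        def pop(self):
            if not self._items:
                raise IndexError("pop from empty stack")
            return self._items.pop()

    class TestStackContract(unittest.TestCase):
        def test_pop_returns_the_last_pushed_item(self):
            s = Stack()
            s.push(1)
            s.push(2)
            self.assertEqual(s.pop(), 2)

        def test_pop_on_an_empty_stack_raises(self):
            self.assertRaises(IndexError, Stack().pop)

    if __name__ == "__main__":
        unittest.main()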

DevSolar
+1 for "test is specification" argument.
mouviciel
+2  A: 
  1. The complexity of the unit test code is (or should be) less (often orders of magnitude less) than the real code
  2. The chance of your coding a bug in your unit test that exactly matches a bug in your real code is much less than the chance of just coding the bug in your real code (if you code a bug in your unit test that doesn't match a bug in your real code, the test should fail). Of course, if you have made incorrect assumptions in your real code, you are likely to make the same assumptions again - although the mindset of unit testing should still reduce even that case
  3. As already alluded to, when you write a unit test you have (or should have) a different mindset. When writing real code you're thinking "how do I solve this problem?". When writing a unit test you're thinking "how do I test every possible way this could break?"

As others have already said, it's not so much about whether you can prove that the unit tests are correct and complete (although that's almost certainly much easier with test code) as it is about reducing the bug count to a very low number - and pushing it lower and lower.

Of course there has to come a point where you're confident enough in your unit tests to rely on them - for example when doing refactorings. Reaching this point is usually just a case of experience and intuition (although there are code coverage tools that help).

Phil Nash
+1  A: 

You can't prove tests are correct, and if you're trying to, you're Doing It Wrong.

Unit tests are a first screen - a smoke test - like all automated testing. They are primarily there to tell you if a change you make later on breaks stuff. They are not designed to be a proof of quality, even at 100% coverage.

The metric does make management feel better, though, and that is useful in itself sometimes!

Sarah Mei
+4  A: 

There are two ways to help ensure the correctness of your unit tests:

  • TDD: Write the test first, then write the code it's meant to test. That means you get to see it fail. If you know that it detects at least some classes of bugs (such as "I haven't implemented any functionality in the function I want to test yet"), then you know that it's not completely useless. It may still let some other bugs slip past, but we know that the test is not completely incorrect.
  • Have lots of tests. If one test lets some bugs slip past, they'll most likely cause errors further down the line, causing other tests to fail (see the sketch after this list). As you notice that and fix the offending code, you get a chance to examine why the first test didn't catch the error as expected.
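
A sketch in Python of the second point (normalize() and greeting() are hypothetical): the unit test misses a bug, but a test further down the line trips over it.

    import unittest

    def normalize(name):
        # Hypothetical unit with a subtle bug: it strips spaces
        # entirely instead of leaving single spaces intact.
        return name.replace(" ", "").lower()

    def greeting(name):
        # Hypothetical code further down the line.
        return "Hello, " + normalize(name) + "!"

    class TestNormalize(unittest.TestCase):
        def test_lowercases(self):
            # This test lets the space bug slip past...
            self.assertEqual(normalize("Ada"), "ada")

    class TestGreeting(unittest.TestCase):
        def test_full_name(self):
            # ...but this one fails (actual: "Hello, adalovelace!"),
            # prompting a second look at both the code and the first test.
            self.assertEqual(greeting("Ada Lovelace"), "Hello, ada lovelace!")

    if __name__ == "__main__":
        unittest.main()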

And finally, of course, keep the unit tests so simple that they're unlikely to contain bugs.

jalf
+2  A: 

First, let me start by saying that unit testing is NOT only about testing. It is more about the design of the application. To see this in action, record your screen while you write unit tests. You will realize that you are making a lot of design decisions as you write them.

How to know if my unit tests are good?

You cannot test the logic itself, period! If your code says that 2 + 2 = 5 and your test makes sure that 2 + 2 = 5, then for you 2 + 2 is 5. To write good unit tests you MUST have a good understanding of the domain you are working with. When you know what you are trying to accomplish, you will write good tests and good code to accomplish it. If you have many unit tests and your assumptions are wrong, then sooner or later you will find out your mistakes.
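
A deliberately contrived sketch of that trap in Python - the code and the test share the same wrong belief, so the test passes:

    import unittest

    def add(a, b):
        # The code encodes the wrong belief that 2 + 2 = 5...
        if a == 2 and b == 2:
            return 5
        return a + b

    class TestAdd(unittest.TestCase):
        def test_two_plus_two(self):
            # ...and the test encodes the very same belief. It passes,
            # and only domain knowledge can expose the error.
            self.assertEqual(add(2, 2), 5)

    if __name__ == "__main__":
        unittest.main()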

azamsharp
A: 

This is one of the advantages of TDD: the code acts as a test for the tests.

It is possible that you'll make equivalent errors, but it is uncommon in my experience.

But I have certainly had the case where I wrote a test that should fail, only to have it pass - which told me my test was wrong.

When I was first learning unit testing, and before I was doing TDD, I would also deliberately break the code after writing the test, to ensure that the test failed as I expected. When it didn't, I knew the test was broken.

I really like Bob Martin's description of this as being equivalent to double entry bookkeeping.

Jeffrey Fredrick
A: 

Code review?

TraumaPony
+2  A: 

I had this same question, and having read the comments, here's what I now think (with due credit to the previous answers):

I think the problem may be that we both took the ostensible purpose of unit tests -- to prove the code is correct -- and applied that purpose to the tests themselves. That's fine as far as it goes, except the purpose of unit tests is not to prove that the code is correct.

As with all nontrivial endeavors, you can never be 100% sure. The correct purpose of unit tests is to reduce bugs, not eliminate them - most specifically, as others have noted, bugs introduced when you make changes later on that might accidentally break something. Unit tests are just one tool to reduce bugs, and certainly should not be the only one. Ideally you combine unit testing with code review and solid QA in order to reduce bugs to a tolerable level.

Unit tests are much simpler than your code; it's not possible to make your code as simple as a unit test if your code does anything significant. If you write "small, granular" code that's easy to prove correct, then your code will consist of a huge number of small functions, and you're still going to have to determine whether they all work correctly in concert.

Since unit tests are inevitably simpler than the code they're testing, they're less likely to have bugs. Even if some of your unit tests are buggy, overall they're still going to improve the quality of your main codebase. (If your unit tests are so buggy that this isn't true, then likely your main codebase is a steaming pile as well, and you're completely screwed. I think we're all assuming a basic level of competence.)

If you DID want to apply a second level of unit testing to prove your unit tests correct, you could do so, but it's subject to diminishing returns. To look at it faux-numerically:

Assume that unit testing reduces the number of production bugs by 50%. You then write meta-unit tests (unit tests to find bugs in the unit tests). Say that this finds problems with your unit tests, reducing the production bug rate to 40%. But it took 80% as long to write the meta-unit tests as it did to write the unit tests. For 80% of the effort you only got another 20% of the gain. Maybe writing meta-meta-unit tests gives you another 5 percentage points, but now again that took 80% of the time it took to write the meta-unit tests, so for 64% of the effort of writing the unit tests (which gave you 50%) you got another 5%. Even with substantially more liberal numbers, it's not an efficient way to spend your time.

In this scenario it's clear that going past the point of writing unit tests isn't worth the effort.

dirtside
A: 

"Edit: I also realize that you could write small, granular unit tests that would be easy to understand. However, if you assume that small, granular code is flawless and bulletproof, you could just write small, granular programs and not need unit testing."

The idea of unit testing is to test the most granular things, then stack together tests to prove the larger case. If you're writing large tests, you lose a bit of the benefits there, although it's probably quicker to write larger tests.

Dean J
I agree that unit testing should test small, individual pieces of functionality. However, I didn't like the argument that tests could be considered correct simply because they are small and granular and appear to be correct. If that were the case, you could just write small, granular code, check that it looks correct, and then not write unit tests at all.
Jacob Adams
A: 

Dominic mentioned that "for this to be a problem your code would have to be buggy in a way that coincidentally causes your tests to pass". One technique you can use to see whether this is a problem is mutation testing: it makes changes to your code and checks whether they cause the unit tests to fail. If they don't, it may indicate areas where the testing isn't 100% thorough.
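
A hand-rolled illustration in Python (real mutation testing tools such as mutmut for Python or PIT for Java automate this; is_adult() and its mutant are hypothetical):

    import unittest

    def is_adult(age):
        # Hypothetical unit under test.
        return age >= 18

    def is_adult_mutant(age):
        # A "mutant": the same code with one operator flipped
        # (>= became >), as a mutation testing tool would generate.
        return age > 18

    class TestIsAdult(unittest.TestCase):
        def test_boundary(self):
            # This test "kills" the mutant, because 18 is exactly
            # the boundary the mutation moved.
            self.assertTrue(is_adult(18))

    if __name__ == "__main__":
        # Crude mutation run: swap in the mutant and expect at least
        # one test to fail. A surviving mutant means a coverage gap.
        suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestIsAdult)
        globals()["is_adult"] = is_adult_mutant
        result = unittest.TextTestRunner().run(suite)
        print("mutant killed" if not result.wasSuccessful()
              else "mutant survived: the tests are not thorough enough")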

Andrew Grimm