views: 367
answers: 14

I tried looking through all the pages about unit tests and could not find this question. If this is a duplicate, please let me know and I will delete it.

I was recently tasked to help implement unit testing at my company. I realized that I could unit test all the Oracle PL/SQL code, Java code, HTML, JavaScript, XML, XSLT, and more.

Is there such a thing as too much unit testing? Should I write unit tests for everything above or is that overkill?

A: 

Unit test any code that you think might change.

mgroves
Alternatively, system-test functionality whose implementation might change: the *functionality* shouldn't change, so test that; but if you're throwing away the *implementation*, why unit-test the implementation?
ChrisW
+10  A: 

This depends on the project and its tolerance for failure. There is no single answer. If you can risk a bug, then don't test everything.

When you have tons of tests, it is also likely you will have bugs in your tests, adding to your headaches.

Test what needs testing and leave out what does not, which often means leaving out the fairly simple stuff.

Aiden Bell
+1 to bugs in your tests - they're just as likely as bugs in the code.
Mark Ransom
Bugs in our tests? Perhaps we should write unit tests for our tests.
CiscoIPPhone
@CiscoIPPhone ... what about those tests? They might have bugs ;) What we need is some meta-test-testing framework that will test test test integrity.
Aiden Bell
Perhaps an AI that will read your code and then design tests against it? :-)
Ascalonian
A: 

You should really only write unit tests for code which you have written yourself. There is no need to test functionality that is inherently provided to you.

For example, if you've been given a library with an add function, you should not be testing that add(1,2) returns 3. But if you've WRITTEN that code yourself, then yes, you should be testing it.
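
As a minimal sketch of that distinction using Python's built-in unittest (the `add` function here stands in for code you wrote yourself; it is not from the question):

```python
import unittest

# Hypothetical function you wrote yourself -- this is worth testing.
# A vendor library's add() would not be.
def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    # Test your own code, not the platform's arithmetic.
    def test_add_small_integers(self):
        self.assertEqual(add(1, 2), 3)

    def test_add_negative(self):
        self.assertEqual(add(-1, 1), 0)

if __name__ == "__main__":
    # exit=False so the script continues after the test run
    unittest.main(argv=["prog"], exit=False)
```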

Of course, whoever wrote the library may not have tested it, and it may not work... in which case you should write the functionality yourself or find a different library that provides it.

Kurisu
A: 

Well, you certainly shouldn't unit test everything, but you should at least test the complicated tasks, or those most likely to contain errors or cases you haven't thought of.

schnaader
A: 

The point of unit testing is being able to run a quick set of tests to verify that your code is correct. This lets you verify that your code matches your specification and also lets you make changes and ensure that they don't break anything.

Use your judgement. You don't want to spend all of your time writing unit tests or you won't have any time to write actual code to test.

Sam DeFabbia-Kane
+3  A: 

Kent Beck of JUnit and JUnitMax fame answered a similar question of mine. The question has slightly different semantics but the answer is definitely relevant

John Nolan
A: 

When you've unit tested your unit tests, thinking you have thereby achieved 200% coverage.

Pyth
http://stackoverflow.com/questions/244345/how-do-you-unit-test-a-unit-test/1076159#1076159
cwash
+1  A: 

Yes, there is such a thing as too much unit testing. One example is whitebox unit testing that effectively tests a specific implementation rather than its behaviour. Such testing slows down progress and refactoring, because even compliant code needs new unit tests whenever the implementation details it depends on change.
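
That implementation-coupling can be sketched in a few lines; the `Cache` class and its tests below are hypothetical, purely to illustrate the difference:

```python
# Hypothetical cache class; names are illustrative, not from the question.
class Cache:
    def __init__(self):
        self._store = {}  # internal detail: a dict today, maybe an LRU structure tomorrow

    def put(self, key, value):
        self._store[key] = value

    def get(self, key, default=None):
        return self._store.get(key, default)

# Over-specified (whitebox) test: passes today, but breaks as soon as
# the internal dict is replaced, even if behaviour is unchanged.
def test_cache_whitebox():
    c = Cache()
    c.put("a", 1)
    assert c._store == {"a": 1}  # couples the test to the implementation

# Behavioural test: survives refactoring as long as the contract holds.
def test_cache_behavior():
    c = Cache()
    c.put("a", 1)
    assert c.get("a") == 1
    assert c.get("missing", 0) == 0

test_cache_behavior()
```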

McWafflestix
+1  A: 

While more tests are usually better (I have yet to be on a project that actually had too many tests), there's a point at which the ROI bottoms out and you should move on. I'm assuming you have finite time to work on this project, by the way. ;)

Adding unit tests has some amount of diminishing returns -- after a certain point (Code Complete has some theories), you're better off spending your finite amount of time on something else. That may be more testing/quality activities like refactoring and code review, usability testing with real human users, etc., or it could be spent on other things like new features, or user experience polish.

ojrac
Good answer. I think that you can take a more fine grained view on ROI for testing a specific feature or piece of code. When you look at it like this, I think you get a better grasp on where the actual cutoff is for diminishing returns w/r/t testing.
cwash
A: 

There is a development approach called test-driven development which essentially says that there is no such thing as too much (non-redundant) unit testing. That approach, however, is not a testing approach, but rather a design approach which relies on working code and a more or less complete unit test suite with tests which drive every single decision made about the codebase.

In a non-TDD situation, automated tests should exercise every line of code you write (branch coverage in particular is good), but even then there are exceptions: you shouldn't be testing vendor-supplied platform or framework code unless you know for certain that it has bugs which will affect you. You shouldn't be testing thin wrappers (or, equally, if you need to test it, the wrapper is not thin). You should be testing all core business logic, and it is certainly helpful to have some set of tests that exercise your database at some elemental level, although those tests will never fit well into the common practice of running unit tests every time you compile.
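
As a sketch of what branch coverage adds over line coverage (the function here is invented for illustration):

```python
# Hypothetical pricing function with two branches; illustrative only.
def shipping_cost(weight_kg):
    if weight_kg <= 1.0:
        return 5.0  # branch 1: flat rate for small parcels
    return 5.0 + (weight_kg - 1.0) * 2.0  # branch 2: surcharge per extra kg

# A single heavy parcel would execute every line via branch 2's fall-through,
# but branch coverage requires exercising both outcomes of the `if`.
assert shipping_cost(0.5) == 5.0  # takes branch 1
assert shipping_cost(3.0) == 9.0  # takes branch 2
```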

With regard to databases specifically: testing is intrinsically slow and, depending on how much logic is held in your database, quite difficult to get right. Typically things like databases, HTML/XML documents and templating, and other document-ish aspects of a program are verified more than they are tested. The difference is usually that testing tries to exercise execution paths, whereas verification checks inputs and outputs directly.

To learn more about this I would suggest reading up on "Code Coverage". There is a lot of material available if you're curious about this.

Mike Burton
I'm not sure that TDD explicitly says there is no such thing as too much unit testing. I think more accurately it says that any production code that is introduced should be adequately covered, but not to the extent that you keep writing more and more tests and never get around to implementing any functionality. You use writing a test first to drive out writing more functionality.
cwash
+1  A: 

I suggest that in some situations you might want automated testing, but no 'unit' testing at all (Should one test internal implementation, or only test public behaviour?), and that any time spent writing unit tests would be better spent writing system tests.

ChrisW
Not all automated tests are unit tests. Best to refer to it as "Developer Testing"
cwash
I don't know; I'm running automated system tests instead of unit tests... and running them frequently/routinely, as part of daily development of new features (or, sometimes, of refactoring). The whole system test suite takes only several seconds to run, and I have my own copy of the system (including database, etc.) on which to test.
ChrisW
@ChrisW I think we're saying the same thing. Sounds like a valid developer test suite to me. I'm arguing that people refer to their automated tests (written and maintained by developers) as developer tests. Just because it runs using a unit testing framework doesn't make it a unit test. I have a blog entry that clarifies this more: http://cwash.org/2009/02/17/dont-unit-test-anymore-no-really/
cwash
It works for me because a) my tests run quickly (some people e.g. http://stackoverflow.com/questions/1094413/is-there-such-a-thing-as-too-much-unit-testing/1094597#1094597 say that unit tests can't be replaced by the system tests because the system tests take too long to run), and b) it's no more expensive for me to fix bugs during integration than it would be to fix bugs when coding and testing 'units' (because I'm the only person involved in the integration).
ChrisW
+4  A: 

Both covering code with unit tests and leaving code uncovered have a cost.

The costs of excluding code from unit testing may include (but are not limited to):

  • Increased development time due to fixing issues you can't automatically test
  • Fixing problems discovered during QA testing
  • Fixing problems discovered when the code reaches your customers
  • Loss of revenue due to customer dissatisfaction with defects that made it through testing

The costs of writing a unit test include (but aren't limited to):

  • Writing the original unit test
  • Maintaining the unit test as your system evolves
  • Refining the unit test to cover more conditions as you discover them in testing or production
  • Refactoring unit tests as the underlying code under test is refactored
  • Lost revenue when it takes longer for your application to enter the market
  • The opportunity cost of implementing features that could drive sales

You have to make your best judgement about what these costs are likely to be, and what your tolerance is for absorbing such costs.

In general, unit testing costs are mostly absorbed during the development phase of a system, and somewhat during its maintenance. If you spend too much time writing unit tests you may miss a valuable window of opportunity to get your product to market. This could cost you sales or even long-term revenue if you operate in a competitive industry.

The cost of defects is absorbed during the entire lifetime of your system in production, up until the point the defect is corrected. And potentially even beyond that, if the defect is significant enough to affect your company's reputation or market position.

So, to answer your question: is there such a thing as too much unit testing? Sure. The problem is finding the right balance between enough unit testing to cover the important areas of functionality and focusing effort on creating new value for your customers in terms of system functionality.

LBushkin
Good answer. I might add that most systems (or modules) spend much more time in maintenance than in initial development. If you take this approach I think it also makes sense to weigh not only the costs, but also the benefits of testing and the benefits of not testing. This will give you a bit more confidence in your decision.
cwash
+2  A: 

The purpose of unit tests is generally to make it possible to refactor or change code with greater assurance that you did not break anything. If a change is scary because you do not know whether you will break anything, you probably need to add a test. If a change is tedious because it will break a lot of tests, you probably have too many tests (or tests that are too fragile).

The most obvious case is the UI. What makes a UI look good is hard to test, and comparing output against a master example tends to be fragile. So the layer of the UI responsible for the look of things tends not to be tested.

The other times it might not be worth it is if the test is very hard to write and the safety it gives is minimal.

For HTML I tended to check that the data I wanted was there (using XPath queries), but did not test the entire HTML. Similarly for XSLT and XML. In JavaScript, when I could I tested libraries but left the main page alone (except that I moved most code into libraries). If the JavaScript is particularly complicated I would test more. For databases I would look into testing stored procedures and possibly views; the rest is more declarative.
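
That data-presence style of check might look like the following sketch. The page fragment and `order_rows` helper are hypothetical, and Python's standard library stands in for whatever XPath tooling you actually use:

```python
import xml.etree.ElementTree as ET

# Hypothetical page fragment; in practice this would come from your template engine.
html = """
<html><body>
  <h1>Orders</h1>
  <table id="orders">
    <tr><td>Widget</td><td>3</td></tr>
    <tr><td>Gadget</td><td>7</td></tr>
  </table>
</body></html>
"""

def order_rows(document):
    """Extract the rows of the orders table, ignoring the surrounding markup."""
    root = ET.fromstring(document)
    table = root.find(".//table[@id='orders']")
    return [[td.text for td in row.findall("td")] for row in table.findall("tr")]

# Verify the data you care about is present; don't diff the entire document.
rows = order_rows(html)
assert ["Widget", "3"] in rows
assert ["Gadget", "7"] in rows
```

The point of the helper is that a change to, say, the page's styling or headings does not break the test; only a change to the data you asserted on does.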

However, in your case first start with the stuff that worries you the most or is about to change, especially if it is not too difficult to test. Check the book Working Effectively with Legacy Code for more help.

Kathy Van Stone
Why is unit-testing better for this purpose than higher-level testing (i.e. integration and/or system testing)?
ChrisW
Unit testing is good for checking a lot of paths without as much combinatorial blowup as you get from higher-level testing. Higher-level testing, however, is also necessary to check that the system works as a whole and with systems such as databases.
Kathy Van Stone
I suggest that system tests don't vary when the implementation does, and therefore system tests are better for making it possible to refactor with greater assurance. The two chief benefits of unit tests, IMO, are: a) less debugging needed during integration testing, which is useful if and only if debugging during integration testing is relatively difficult or expensive; b) ability to test before integration, e.g. if the other components haven't been written yet.
ChrisW
If system tests take too long to write or run, they won't help with refactoring. In practice I rarely had to change unit tests that much during refactoring (outside of changes that the refactoring tools do automatically). I do take care not to tie the test too closely to the implementation of a class, as opposed to its behavior.
Kathy Van Stone
+1 for the MF reference.
cwash
+1  A: 

As EJD said, you can't verify the absence of errors.

This means there are always more tests you could write. Any of these could be useful.

What you need to understand is that unit-testing (and other types of automated testing you use for development purposes) can help with development, but should never be viewed as a replacement for formal QA.

Some tests are much more valuable than others.

There are parts of your code that change much more frequently, are more prone to break, etc. Tests for these parts are the most economical.

You need to balance out the amount of testing you agree to take on as a developer. You can easily overburden yourself with unmaintainable tests. IMO, unmaintainable tests are worse than no tests because they:

  1. Turn others off from trying to maintain a test suite or write new tests.
  2. Detract from adding new, meaningful functionality. If automated testing is not producing a net-positive result, you should ditch it like any other engineering practice that isn't working.

What should I test?

Test the "Happy Path" - this ensures that you get interactions right and that things are wired together properly. But you don't adequately test a bridge by driving across it on a sunny day with no traffic.

Pragmatic Unit Testing recommends you use Right-BICEP to figure out what to test. "Right" for the happy path, then Boundary conditions, check any Inverse relationships, use another method (if it exists) to Cross-check results, force Error conditions, and finally take into account any Performance considerations that should be verified. I'd say if you think about tests to write in this way, you'll most likely figure out how to get to an adequate level of testing. You'll be able to figure out which ones are more useful and when. See the book for much more info.
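
A sketch of Right-BICEP-flavored tests using Python's unittest; `parse_age` and its 0-150 range are invented purely for illustration:

```python
import unittest

# Hypothetical validator used to illustrate Right-BICEP; not from the question.
def parse_age(text):
    """Parse a human age from a string; valid range is 0-150 inclusive."""
    value = int(text)  # raises ValueError on non-numeric input
    if not 0 <= value <= 150:
        raise ValueError("age out of range: %d" % value)
    return value

class TestParseAge(unittest.TestCase):
    def test_right_happy_path(self):
        self.assertEqual(parse_age("42"), 42)

    def test_boundary_conditions(self):
        self.assertEqual(parse_age("0"), 0)      # lower bound
        self.assertEqual(parse_age("150"), 150)  # upper bound

    def test_error_conditions(self):
        self.assertRaises(ValueError, parse_age, "-1")
        self.assertRaises(ValueError, parse_age, "151")
        self.assertRaises(ValueError, parse_age, "forty-two")

if __name__ == "__main__":
    # exit=False so the script continues after the test run
    unittest.main(argv=["prog"], exit=False)
```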

Test at the right level

As others have mentioned, unit tests are not the only way to write automated tests. Other types of frameworks may be built off of unit tests, but provide mechanisms to do package level, system or integration tests. The best bang for the buck may be at a higher level, and just using unit testing to verify a single component's happy path.

Don't be discouraged

I'm painting a more grim picture here than I expect most developers will find in reality. The bottom line is that you make a commitment to learn how to write tests and write them well. But don't let fear of the unknown scare you into not writing any tests. Unlike production code, tests can be ditched and rewritten without many adverse effects.

cwash