I'm not brand new to the concept of unit testing, but at the same time I haven't mastered it yet either.

The one question that has been going through my head recently, as I write unit tests alongside my code using the TDD methodology, is: to what level should I be testing?

Sometimes I wonder if I'm being excessive in the use of unit testing.

At what point should a developer stop writing unit tests and get actual work done?

I might need to clarify that question before people assume I'm against using TDD...

What I'm struggling with is the granularity of my tests...

  • When my app has a config file, do I test that values can be retrieved from the file? I lean towards yes... but...
  • Do I then write a unit test for each possible config value that will be present? i.e. check that each one exists... and can be parsed to the correct type...
  • When my app writes errors to a log, do I need to test that it is able to write to the log? Do I then need to write tests to verify that entries are actually made to the log?

I want to be able to use my unit tests to verify the behavior of my app...but I'm not quite sure where to stop. Is it possible to write tests that are too trivial?

+2  A: 

yes, unit testing can be taken to excess/extremes

keep in mind that it is only necessary to test features; everything else follows from that

so no, you don't have to test that you can read values from a config file, because one (or more) of the features will need to read values from a config file - and if they don't, then you don't need a config file!

EDIT: There seems to be some confusion as to what I am trying to say. I am not saying that unit testing and feature testing are the same thing - they are not. Per Wikipedia: "a unit is the smallest testable part of an application", and logically such 'units' are smaller than most 'features'.

What I am saying is that unit testing is the extreme, and is rarely necessary - with the possible exception of super-critical software (real-time control systems where lives may be endangered, for example) or projects with no limits on budget and timeline.

For most software, from a practical point of view, testing features is all that is required. Testing units smaller than features won't hurt, and it might help, but the trade-off of productivity vs improvement in quality is debatable.

Steven A. Lowe
That sounds like an integration test: you test something else, and the config values happen to get read along the way. Unit tests exercise each unit (classes in OOP) in isolation, so that the unit and its unit test work without any surrounding code from the app.
Mnementh
I usually have a fake/test double of the config file in this scenario. Mocking out the config files in this case is too much work, I feel.
Gishu
@Mnementh: that depends on what you define as your "unit". I define a feature as the only software unit worthy of the effort of testing. Not a class, not a method, a feature.
Steven A. Lowe
A unit test by definition asserts low-level functionality of a given class, not a complete user feature. Of course you are free to do it your way, just don't be surprised when there are misunderstandings.
Adam Byrtek
@[Adam Byrtek]: see clarification in edits
Steven A. Lowe
@Mnementh: see clarification in edits
Steven A. Lowe
+3  A: 

Unit tests need to test each piece of functionality, edge cases and sometimes corner cases.

If you find that after testing edge and corner cases, you're doing "middle" cases, then that's probably excessive.

Moreover, depending on your environment, unit tests might be either quite time consuming to write, or quite brittle.

Tests do require ongoing maintenance, so every test you write will potentially break in the future and need to be fixed (even though it hasn't detected an actual bug). Trying to do sufficient testing with the minimum number of tests seems like a good goal (but don't needlessly cobble several tests into one - test one thing at a time).
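
For instance, here is a minimal sketch in Python of concentrating on edge and corner cases; parse_port is a made-up helper, not anything from the question:

    import unittest

    def parse_port(value):
        """Hypothetical helper: parse a TCP port number from a string."""
        port = int(value)
        if not 1 <= port <= 65535:
            raise ValueError("port out of range: %d" % port)
        return port

    class ParsePortEdgeCases(unittest.TestCase):
        # Edge and corner cases: the boundaries and the failure modes.
        def test_lowest_valid_port(self):
            self.assertEqual(parse_port("1"), 1)

        def test_highest_valid_port(self):
            self.assertEqual(parse_port("65535"), 65535)

        def test_zero_is_rejected(self):
            self.assertRaises(ValueError, parse_port, "0")

        def test_non_numeric_is_rejected(self):
            self.assertRaises(ValueError, parse_port, "http")

        # A further test for, say, "8080" would be a "middle" case and
        # adds little once the boundaries above are covered.

    if __name__ == "__main__":
        unittest.main()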

MarkR
+8  A: 

Yes, indeed it is possible to write excessive amounts of unit tests. For example,

  • testing getters and setters (sketched below).
  • testing that basic language functionality works.
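
To illustrate the first bullet, a sketch of the kind of test that adds almost nothing; the Point class here is invented purely for the example (Python):

    import unittest

    class Point:
        """Made-up class with trivial accessors."""
        def __init__(self, x):
            self._x = x

        def get_x(self):
            return self._x

        def set_x(self, x):
            self._x = x

    class PointAccessorTest(unittest.TestCase):
        # This can only fail if the one-line accessors are broken,
        # which is exactly the kind of excess being warned about.
        def test_set_then_get(self):
            p = Point(0)
            p.set_x(42)
            self.assertEqual(p.get_x(), 42)

    if __name__ == "__main__":
        unittest.main()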
Steve McLeod
Language/platform isn't mentioned. But if it's .NET, getters and setters (as well as constructors) can be easily tested with http://www.codeplex.com/classtester, the Automatic Class Tester project from CodePlex.
joseph.ferris
Sure, but I believe you don't learn much from these tests (see http://stackoverflow.com/questions/108692/is-there-a-java-unit-test-framework-that-auto-tests-getters-and-setters#108711), as the getters and setters will be automatically generated too. No need to generate tests for generated code.
Olaf
+2  A: 

In unit testing, you would write a test that shows that it is possible to read items from the config files. You'd test any possible quirks so that you have a representative set of tests: e.g. can you read an empty string, a long string, or a string with escaped characters? Can the system distinguish between an empty and a missing string?
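
A rough sketch of such a representative set, using Python's standard configparser simply to stand in for "the config facility":

    import unittest
    from configparser import ConfigParser, NoOptionError

    def load_config(text):
        """Helper for the tests: parse config data from a string."""
        parser = ConfigParser()
        parser.read_string(text)
        return parser

    class ConfigQuirksTest(unittest.TestCase):
        def test_reads_empty_string(self):
            cfg = load_config("[app]\nname =\n")
            self.assertEqual(cfg.get("app", "name"), "")

        def test_reads_long_string(self):
            cfg = load_config("[app]\nname = " + "x" * 10000 + "\n")
            self.assertEqual(len(cfg.get("app", "name")), 10000)

        def test_missing_key_is_distinct_from_empty(self):
            cfg = load_config("[app]\nname =\n")
            self.assertRaises(NoOptionError, cfg.get, "app", "missing")

    if __name__ == "__main__":
        unittest.main()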

With that test done, it is not necessary to re-check that capability for every time another class uses the facility you've already tested. Otherwise, for every function you test, you'd have to re-test every operating system feature it relied on. The tests for a given feature only need to test what that feature's code is responsible for getting right.

Sometimes if this is hard to judge, it indicates something that needs refactoring to make the question easier to answer. If you have to write the same test lots of times for different features, this may indicate that those features share something inside them that could be moved out into a single function or class, tested once and then reused.

In broader terms this is an economics question. Assuming you've stopped needless duplicated tests, how much can you afford your tests to be complete? It is effectively impossible to write genuinely complete tests for any non-trivial program due to the combinations of circumstances that can occur, so you have to make the call.

Many successful products have taken over the world despite having no unit tests when they originally launched, including some of the most famous desktop applications of all time. They were unreliable, but good enough, and if they'd invested more in reliability then their competitors would have beaten them to first place in market share. (Look at Netscape, who got first place with a product that was notoriously unreliable, and then died out completely when they took time out to do everything the right way.) This is not what we as engineers want to hear, and hopefully these days customers are more discerning, but I suspect not by much.

Daniel Earwicker
+4  A: 

It's definitely possible to overdo unit tests, and testing features is a good place to start. But don't overlook testing error handling as well. Your units should respond sensibly when given inputs that don't meet their precondition. If your own code is responsible for the bad input, an assertion failure is a sensible response. If a user can cause the bad input, then you'll need to be unit testing exceptions or error messages.
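
As a sketch of what testing the error path can look like - withdraw is a made-up function here, the assertions on bad input are the point (Python):

    import unittest

    def withdraw(balance, amount):
        """Made-up function with preconditions on its input."""
        if amount < 0:
            raise ValueError("amount must be non-negative")
        if amount > balance:
            raise ValueError("insufficient funds")
        return balance - amount

    class WithdrawErrorHandlingTest(unittest.TestCase):
        # The happy path...
        def test_normal_withdrawal(self):
            self.assertEqual(withdraw(100, 30), 70)

        # ...and the responses to bad input, which deserve tests of their own.
        def test_negative_amount_is_rejected(self):
            self.assertRaises(ValueError, withdraw, 100, -1)

        def test_overdraft_is_rejected(self):
            self.assertRaises(ValueError, withdraw, 100, 101)

    if __name__ == "__main__":
        unittest.main()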

Every reported bug should result in at least one unit test.

Regarding some of your specifics: I would definitely test my config-file parser to see that it can parse every value of every expected type. (I tend to rely on Lua for config files and parsing, but that still leaves me with some testing to do.) But I wouldn't write a unit test for every entry in the config file; instead I'd write a table-driven test framework that would describe each possible entry and would generate the tests from that. I would probably generate documentation from the same description. I might even generate the parser.
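
A rough sketch of the table-driven idea in Python; CONFIG_ENTRIES and parse_entry are invented for the example, but the point is that one description drives every check (and could drive documentation too):

    import unittest

    # One row per expected config entry: (key, raw text, expected parsed value).
    CONFIG_ENTRIES = [
        ("timeout_seconds", "30", 30),
        ("retry_count", "5", 5),
        ("verbose", "true", True),
    ]

    def parse_entry(key, raw):
        """Invented parser: convert raw text to the type the key expects."""
        if key == "verbose":
            return raw.lower() == "true"
        return int(raw)

    class TableDrivenConfigTest(unittest.TestCase):
        def test_every_entry_parses_to_expected_value(self):
            for key, raw, expected in CONFIG_ENTRIES:
                # subTest reports each row separately instead of stopping
                # at the first failing entry.
                with self.subTest(key=key):
                    self.assertEqual(parse_entry(key, raw), expected)

    if __name__ == "__main__":
        unittest.main()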

When your app writes entries to a log you are veering into integration tests. A better approach would be to have a separate logging component like syslog. Then you can unit test the logger, put it on the shelf, and reuse it. Or even better, reuse syslog. A short integration test can then tell you whether your app is interoperating correctly with syslog.

In general if you find yourself writing a lot of unit tests, perhaps your units are too large and not orthogonal enough.

I hope some of this helps.

Norman Ramsey
+26  A: 

[Update:] Found the concise answer to this question in TDD By Example, p. 194.

The simple answer, supplied by Phlip is, "Write tests until fear is transformed into boredom."

[/Update]

I think the problem prevalent in the current times is the lack of unit testing... not excessive testing. I think I see what you're getting at: I wouldn't term it excessive unit testing, but rather not being smart about where you focus your efforts.

So to answer your question, some guidelines:

  • If you follow TDD, you'll never have code that is not covered by a unit test, since you only write (minimal) code to pass a failing unit test and no more. Corollary: every issue should cause a unit test to fail, pinpointing the location of the defect. The same defect shouldn't cause tens of unit tests to break simultaneously.
  • Don't test code that you didn't write. A corollary: you don't test framework code (like reading values from an app.config file). You just assume it works. And how many times have you had framework code breaking? Next to zero.
  • If in doubt, consider the probability of failure and weigh it against the cost of writing an automated test case. This includes writing test cases for accessors and for repetitive data-set testing.
  • Address the pain. If you find that you're periodically having issues in a certain area, get it under a test harness instead of spending time writing redundant tests for areas that you know are pretty solid. e.g. a third-party/team library keeps breaking at the interface - it doesn't work like it is supposed to, and mocks won't catch it. Have a regression-type suite using the real collaborator and running some sanity tests to verify the link, if you know it's been a problem child (sketched below).
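
A sketch of that last point - a sanity suite run against the real collaborator rather than a mock. ThirdPartyClient here is only a placeholder for whatever library keeps breaking; in practice you would import the real client instead of defining a stub (Python):

    import unittest

    class ThirdPartyClient:
        """Placeholder for the real third-party library; replace this stub
        with an import of the actual client when using the idea for real."""
        def connect(self):
            return True

        def echo(self, message):
            return message

    class ThirdPartySanityTest(unittest.TestCase):
        """Regression-style checks against the real collaborator,
        catching interface breakage that mocks would hide."""

        def setUp(self):
            self.client = ThirdPartyClient()

        def test_can_connect(self):
            self.assertTrue(self.client.connect())

        def test_echo_round_trip(self):
            self.assertEqual(self.client.echo("ping"), "ping")

    if __name__ == "__main__":
        unittest.main()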
Gishu
When Phlip says that the same defect shouldn't cause bunches of other tests to fail, my experience is that that isn't always so. For example, I was using TDD to write an interpreter for a protocol the other day, testing each protocol command together with responses from the mocked-up underlying datastream. If a bug was introduced that broke, say, the parsing mechanism so that no responses ever appeared, all those tests would break. However, they would all break in the same place in each test. Does this count, or am I Doing It Wrong?
Kaz Dragon
Phlip just said the first line :) the rest is me shooting from the lip. One broken test per defect: in your case, a defect in your interpreter code should be flagged up by one test. I may be wrong here... are you saying that there is a bug in a common code block used for all commands? In which case it should be flagged up by the low-level test exercising the common code block - which zones in on the area to fix. Tests that build on top of this unit are bound to fail, which is okay. In short, your tests should tell you where the defect is, instead of you having to debug your tests to find out.
Gishu
Sometimes it makes sense to test code you didn't write. You may want to test your *understanding* of the code rather than the code itself. Maybe the framework is flawless, but you misunderstood the purpose of one of the parameters, etc. Still, you want to allocate the large majority of tests to code you wrote.
John D. Cook
@John - sure - called "learner tests", I believe. However, that is the exception to the guideline here. The purpose of learner tests is to validate your understanding of unknown code via some assertions. Not technically unit testing or TDD per se... more test-driven learning.
Gishu
+2  A: 

It's very possible, but the problem isn't having too many tests - it's testing stuff you don't care about, or investing too much in testing stuff for which fewer and simpler tests would have been enough.

My guiding principle is the level of confidence I have when changing a piece of code: if it will never fail, I won't need the test. If it's straightforward, a simple sanity check will do. If it's tricky, I crank up the tests until I feel confident enough to make changes.

orip
+1  A: 

Excessive unit testing often arises when you use code generation to generate really obvious unit tests. However, since generated unit tests do not really hurt anyone (and do not affect the cost-benefit ratio negatively), I say leave them in - they might come in useful when you least expect it.

Dmitri Nesteruk
+1  A: 

Of course one can overtest the same way one can over-engineer.

As you follow Test-Driven Development, you should gain confidence about your code and stop when you're confident enough. When in doubt, add a new test.

Regarding trivial tests, the relevant eXtreme Programming saying is "test everything that could break".

philippe
+2  A: 

I believe that a good test tests a bit of specification. Any test that tests something that is not part of a specification is worthless and should thus be omitted, e.g. testing methods that are just means of implementing the specified functionality of a unit. It is also questionable whether it is worthwhile testing truly trivial functionality such as getters and setters, although you never know how long they will stay trivial.

The problem with testing according to specification is that many people use tests as specifications, which is wrong for many reasons -- partly since it stops you from being able to actually know what you should test and what not (another important reason is that tests are always testing only some examples, while specifications should always specify behaviour for all possible inputs and states).

If you have proper specifications for your units (and you should), then it should be obvious what needs testing and anything beyond that is superfluous and thus waste.

Peter Becker
+2  A: 

One thing to note, based upon some of the answers given: if you find that you need to write numerous unit tests to do the same thing over and over, consider refactoring the code in question to address the root cause.

Do you need to write a test for every place you access a configuration setting? No. You can test it once if you refactor and create a single point of entry for the functionality. I believe in testing as much functionality as is feasibly possible. But it is really important to realize that if you omit the refactoring step, your code coverage will plummet as you continue to have "one-off" implementations throughout the codebase.
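
A sketch of the single-point-of-entry idea in Python; the Settings wrapper is hypothetical:

    import unittest
    from configparser import ConfigParser

    class Settings:
        """Hypothetical single point of entry for configuration access."""
        def __init__(self, parser):
            self._parser = parser

        def timeout_seconds(self):
            # Every caller goes through this method, so its parsing and
            # defaulting logic only needs to be tested once.
            return self._parser.getint("app", "timeout", fallback=30)

    class SettingsTest(unittest.TestCase):
        def test_reads_configured_timeout(self):
            parser = ConfigParser()
            parser.read_string("[app]\ntimeout = 10\n")
            self.assertEqual(Settings(parser).timeout_seconds(), 10)

        def test_falls_back_to_default_when_missing(self):
            parser = ConfigParser()
            parser.read_string("[app]\n")
            self.assertEqual(Settings(parser).timeout_seconds(), 30)

    if __name__ == "__main__":
        unittest.main()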

joseph.ferris
+7  A: 

In practice the problem isn't that people write too many tests, it's that they distribute their tests unevenly. Sometimes you'll see people who are new to unit testing write hundreds of tests for the things that are easy to test, but then they run out of steam before they put any tests where they're most needed.

John D. Cook
+1  A: 

At what point should a developer stop writing unit tests and get actual work done?

The point of unit tests - besides providing guidance in design - is to give you feedback on whether you actually did get work done. Remember that old adage: if it doesn't have to work, I'm finished now.

In Lean vocabulary, tests are "necessary waste" - they don't provide any direct value. So the art is in writing only those tests that provide indirect value - by helping us get confidence in that what we produced actually works.

So, the ultimate guide on what tests to write should be your confidence level about the production code. That's where the Extreme Programming mantra "test everything that could possibly break" is coming from - if something could possibly break, we need a test as our safety net, to be able to move quickly in the future, by refactoring with confidence. If something "couldn't possibly break" (as is often said about simple accessors), writing tests for it would be total waste.

Of course you fail in your assessment from time to time. You will need experience to find the right balance. Most importantly, whenever you get a bug report against your code, you should think about what kind of test would have prevented this bug from going out into the wild, and will prevent similar bugs in the future. Then add this kind of test to your collection of tests for code "that could possibly break".

Ilja Preuß
A: 

If two test cases will run exactly the same code, then there's no need to test them separately. e.g., for your example of reading the config file, you only need to test that it is able to correctly read each type of value (and that it fails in the correct manner when asked to read a nonexistent or invalid value).

If you test that it correctly reads in every single value in the config file, then you are testing the config file, not the code.

Dave Sherohman
A: 

If you find yourself spending all your debugging time in the testing routines, you may have gone overboard.

Brian Knoblauch
A: 

The TDD methodology is about design - having a suite of tests for later is a rather welcome side effect. Test-driven development yields completely different code than writing tests "just" as an afterthought.

So when you ask in the TDD context: it's easy to over-design a solution until it is "overengineered". Stop when you have the design fixed firmly enough that it will not slip. There's no need to bolt it down, strap it, and cover it with cement - you need the design to remain flexible enough to be changed during the next refactoring.

My personal story of overengineered testing is an over-mocked test where the implementation of some classes was more or less mirrored in 'expect' calls to the respective mock objects. Talk about resistance to adapting to changed requirements...
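
A sketch of that anti-pattern using Python's unittest.mock; the names are invented, but the shape is the tell-tale part - the test restates the implementation call by call:

    import unittest
    from unittest.mock import Mock, call

    def sync_orders(repository, gateway):
        """Invented code under test."""
        for order in repository.pending_orders():
            gateway.submit(order)
            repository.mark_submitted(order)

    class OverMockedSyncTest(unittest.TestCase):
        # Every line of sync_orders is mirrored as an expectation, so any
        # refactoring of the implementation breaks the test even when the
        # observable behaviour stays the same.
        def test_mirrors_the_implementation(self):
            repository = Mock()
            gateway = Mock()
            repository.pending_orders.return_value = ["order-1", "order-2"]

            sync_orders(repository, gateway)

            repository.pending_orders.assert_called_once_with()
            gateway.submit.assert_has_calls([call("order-1"), call("order-2")])
            repository.mark_submitted.assert_has_calls(
                [call("order-1"), call("order-2")])

    if __name__ == "__main__":
        unittest.main()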

Olaf
A: 

I think this rule should also apply to TDD, to prevent excessive unit tests.

eric2323223
A: 

To determine how much testing effort I put into a program, I define criteria for the testing campaign in terms of what is to be tested: all the branches of the code, all the functions, all the input or output domains, all the features...

Given this, my test work is done when my criteria are entirely covered.

I just need to be aware that certain goals are impossible to reach, such as covering all program paths or all input values.