The thing I've found about TDD is that it takes time to get your tests set up, and being naturally lazy I always want to write as little code as possible. The first thing I seem to do is test that my constructor has set all the properties, but is this overkill?

My question is: at what level of granularity do you write your unit tests?

...and is there such a thing as testing too much?

+9  A: 

The classic answer is "test anything that could possibly break". I interpret that as meaning that testing setters and getters that don't do anything except set or get is probably too much testing; there's no need to take the time. Unless your IDE writes those for you, in which case you might as well.

If your constructor not setting properties could lead to errors later, then testing that they are set is not overkill.

Dennis S.
Yup, and this is a bind for a class with many properties and many constructors.
John Nolan
The more trivial a problem is (like forgetting to init a member to zero), the more time it'll take to debug it.
Lev
A: 

Generally, I start small, with inputs and outputs that I know must work. Then, as I fix bugs, I add more tests to ensure the things I've fixed are tested. It's organic, and works well for me.

Can you test too much? Probably, but it's generally better to err on the side of caution, though it'll depend on how mission-critical your application is.

Tim Sullivan
+12  A: 

Everything should be made as simple as possible, but not simpler. - A. Einstein

One of the most misunderstood things about TDD is the first word in it: Test. That's why BDD came along, because people didn't really understand that the first D was the important one, namely Driven. We all tend to think a little bit too much about the testing, and a little bit too little about the driving of design. And I guess this is a vague answer to your question, but you should probably consider how to drive your code instead of what you actually are testing; the latter is something a coverage tool can help you with. Design is a much bigger and more problematic issue.

kitofr
Yeah, it is vague... Does this mean that, as a constructor is not part of the behaviour, we shouldn't be testing it, but I should be testing MyClass.DoSomething()?
John Nolan
Well, it depends :P... a construction test is often a good start when trying to test legacy code. But I would probably (in most cases) leave a construction test out when starting to design something from scratch.
kitofr
It's driven development, not driven design. Meaning: get a working baseline, write tests to verify functionality, move forward with development. I almost always write my tests right before I refactor some code for the first time.
Evan Plaice
A: 

I think you should test everything in the "core" of your business logic. Getters and setters too, because they could accept a negative or null value that you might not want to accept. If you have time (it always depends on your boss), it's good to test the other business logic and all the controllers that call these objects (moving slowly from unit tests to integration tests).
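
As a minimal sketch of such a setter test (NUnit-style; the Account class and its no-negative-balance rule are invented purely for illustration):

using System;
using NUnit.Framework;

[TestFixture]
public class Test_Account_Balance
{
    // Hypothetical rule: Balance must reject negative values.
    [Test]
    public void SettingANegativeBalanceThrows()
    {
        var account = new Account();
        Assert.Throws<ArgumentOutOfRangeException>(() => account.Balance = -1);
    }

    [Test]
    public void SettingAValidBalanceStoresIt()
    {
        var account = new Account();
        account.Balance = 100;
        Assert.AreEqual(100, account.Balance);
    }
}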

Daok
A: 

I don't unit test simple setter/getter methods that have no side effects. But I do unit test every other public method. I try to create tests for all the boundary conditions in my algorithms, and check the coverage of my unit tests.
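
For example, boundary-condition tests for a hypothetical Pricing.Discount(int quantity) method (the method and its 10-item threshold are invented for this sketch) would pin down both sides of the boundary:

using NUnit.Framework;

[TestFixture]
public class DiscountBoundaryTests
{
    // Hypothetical rule: fewer than 10 items earns no discount, 10 or more earns 5%.
    [Test]
    public void NineItems_GetNoDiscount()
    {
        Assert.AreEqual(0.00m, Pricing.Discount(9));
    }

    [Test]
    public void TenItems_GetFivePercent()
    {
        Assert.AreEqual(0.05m, Pricing.Discount(10));
    }
}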

It's a lot of work, but I think it's worth it. I would rather write code (even testing code) than step through code in a debugger. I find the code-build-deploy-debug cycle very time-consuming, and the more exhaustive the unit tests I have integrated into my build, the less time I spend going through that cycle.

You didn't say which architecture you are coding for. But for Java I use Maven 2, JUnit, DbUnit, Cobertura, and EasyMock.

bmatthews68
I didn't say which, as it's a fairly language-agnostic question.
John Nolan
Unit testing in TDD does not only cover you as you are writing the code; it also protects against the person who inherits your code and then thinks it makes sense to format a value inside the getter!
Paxic
+5  A: 

I write tests to cover the assumptions of the classes I will write. The tests enforce the requirements. Essentially, if x can never be 3, for example, I'm going to ensure there is a test that covers that requirement.

Invariably, if I don't write a test to cover a condition, it'll crop up later during "human" testing. I'll certainly write one then, but I'd rather catch them early. I think the point is that testing is tedious (perhaps) but necessary. I write enough tests to be complete but no more than that.
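
For example, a requirement such as "x can never be 3" might be captured like this (the Widget class and the exception type are assumptions for the sketch):

using System;
using NUnit.Framework;

[TestFixture]
public class WidgetRequirementTests
{
    // Hypothetical requirement: Widget.X must never be 3.
    [Test]
    public void SettingXToThreeIsRejected()
    {
        var widget = new Widget();
        Assert.Throws<ArgumentException>(() => widget.X = 3);
    }
}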

itsmatt
+10  A: 

Write unit tests for things you expect to break, and for edge cases. After that, test cases should be added as bug reports come in - before writing the fix for the bug. The developer can then be confident that:

  1. The bug is fixed;
  2. The bug won't reappear.
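
A sketch of that workflow, using an invented bug report against a hypothetical Parser class: the test is written first (and fails), then the code is fixed until it passes, after which it guards against regression:

using NUnit.Framework;

[TestFixture]
public class ParserRegressionTests
{
    // Reproduces a hypothetical bug report: "Parse crashes on empty input."
    // Written before the fix, so it fails first and passes once fixed.
    [Test]
    public void Parse_EmptyInput_ReturnsEmptyTokenList()
    {
        var parser = new Parser();
        var result = parser.Parse("");
        Assert.IsNotNull(result);
        Assert.IsEmpty(result.Tokens);
    }
}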

Per the comment attached - I guess this approach to writing unit tests could cause problems if lots of bugs are discovered in a given class over time. This is probably where discretion is helpful - adding unit tests only for bugs that are likely to recur, or where their recurrence would cause serious problems. I've found that a measure of integration testing in unit tests can be helpful in these scenarios - testing code higher up a codepath can cover the codepaths lower down.

Dominic Rodger
With the number of bugs that I write, this can become an anti-pattern. With hundreds of tests on code where things have broken, your tests can become unreadable, and when the time comes to rewrite those tests it can become an overhead.
John Nolan
+1  A: 

I write unit tests to reach the maximum feasible coverage. If I cannot reach some code, I refactor until the coverage is as full as possible.

Once I've finished this blanket test writing, I usually write one test case reproducing each bug.

I'm used to separating code testing from integration testing. During integration testing (these are also unit tests, but on groups of components, so not exactly what unit tests are for), I test that the requirements are implemented correctly.

Lorenzo Boccaccia
+3  A: 

Test Driven Development means that you stop coding when all your tests pass.

If you have no test for a property, then why should you implement it? If you do not test/define the expected behaviour in case of an "illegal" assignment, what should the property do?

Therefore I'm totally for testing every behaviour a class should exhibit. Including "primitive" properties.

To make this testing easier, I created a simple NUnit TestFixture that provides extension points for setting/getting the value, takes lists of valid and invalid values, and has a single test to check whether the property works right. Testing a single property could look like this:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class Test_MyObject_SomeProperty : PropertyTest<int>
{
    private MyObject obj = null;

    public override void SetUp() { obj = new MyObject(); }
    public override void TearDown() { obj = null; }

    // Extension points telling the base fixture how to access the property.
    public override int Get() { return obj.SomeProperty; }
    public override void Set(int value) { obj.SomeProperty = value; }

    // Values the property must accept, and values it must reject.
    public override IEnumerable<int> SomeValidValues() { return new List<int> { 1, 3, 5, 7 }; }
    public override IEnumerable<int> SomeInvalidValues() { return new List<int> { 2, 4, 6 }; }
}

Using lambdas and attributes this might even be written more compactly. I gather MbUnit even has some native support for things like that. The point, though, is that the above code captures the intent of the property.
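
The PropertyTest<T> base class itself isn't shown above; as a rough reconstruction of what it might look like (my assumption, not the original code, and it assumes invalid values throw ArgumentException):

using System;
using System.Collections.Generic;
using NUnit.Framework;

public abstract class PropertyTest<T>
{
    [SetUp] public abstract void SetUp();
    [TearDown] public abstract void TearDown();

    // Extension points implemented by concrete fixtures.
    public abstract T Get();
    public abstract void Set(T value);
    public abstract IEnumerable<T> SomeValidValues();
    public abstract IEnumerable<T> SomeInvalidValues();

    [Test]
    public void PropertyWorksRight()
    {
        foreach (T value in SomeValidValues())
        {
            Set(value);
            Assert.AreEqual(value, Get(), "valid value was not stored");
        }
        foreach (T value in SomeInvalidValues())
        {
            T captured = value; // avoid modified-closure surprises
            Assert.Throws<ArgumentException>(() => Set(captured),
                "invalid value was not rejected");
        }
    }
}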

P.S.: Probably the PropertyTest should also have a way of checking that other properties on the object didn't change. Hmm .. back to the drawing board.

David Schmitt
I went to a presentation on MbUnit. It looks great.
John Nolan
But David, let me ask you: were you surprised by Kent Beck's response above? Does his answer make you wonder if you should re-think your approach? Not because anyone has "answers from on high", of course. But Kent is thought of as one of the core proponents of test first. Penny for your thoughts!
Charlie Flowers
@Charlie: Kent's response is very pragmatic. I'm "just" working on a project where I'll be integrating code from various sources and I'd like to provide a _very_ high level of confidence.
David Schmitt
That said, I do strive to have tests that are simpler than the tested code and this level of detail might only be worth it in integration tests where all generators, modules, business rules and validators come together.
David Schmitt
+3  A: 

Part of the problem with skipping simple tests now is that, in the future, refactoring could make that simple property very complicated, with lots of logic. I think the best approach is to use tests to verify the requirements for the module. If passing X should get you Y back, then that's what you want to test. Then, when you change the code later on, you can verify that X still gives you Y, and you can add a test that A gives you B when that requirement is added later on.

I've found that the time I spend during initial development writing tests pays off in the first or second bug fix. The ability to pick up code you haven't looked at in 3 months and be reasonably sure your fix covers all the cases, and "probably" doesn't break anything, is hugely valuable. You'll also find that unit tests help you triage bugs well beyond the stack trace. Seeing how individual pieces of the app work and fail gives huge insight into why they work or fail as a whole.
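
A sketch of such a requirement-level test (the TaxCalculator class and its 8% rate are invented for the example); because it pins down only the X-in, Y-out behaviour, it keeps passing no matter how the implementation is refactored later:

using NUnit.Framework;

[TestFixture]
public class TaxCalculatorRequirementTests
{
    // Requirement: passing 100 must give 8 back (hypothetical 8% tax).
    [Test]
    public void Passing100_Returns8()
    {
        Assert.AreEqual(8m, TaxCalculator.TaxFor(100m));
    }
}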

Matt
+49  A: 

I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (I suspect this level of confidence is high compared to industry standards, but that could just be hubris). If I don't typically make a kind of mistake (like setting the wrong variables in a constructor), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.

Different people will have different testing strategies based on this philosophy, but that seems reasonable to me given the immature state of understanding of how tests can best fit into the inner loop of coding. Ten or twenty years from now we'll likely have a more universal theory of which tests to write, which tests not to write, and how to tell the difference. In the meantime, experimentation seems in order.

Kent Beck
The world does not think that Kent Beck would say this! There are legions of developers dutifully pursuing 100% coverage because they think it is what Kent Beck would do! I have told many that you said, in your XP book, that you don't always adhere to Test First religiously. But I'm surprised too.
Charlie Flowers
(cont'd) I have argued for less "extreme" test-first development in some cases, but I think on the whole I may be in the habit of doing too much of it myself currently. I mean, how can you argue with the statement that the goal should be to reach a reasonable level of confidence?
Charlie Flowers
+3  A: 

In most instances, I'd say, if there is logic there, test it. This includes constructors and properties, especially when more than one thing gets set in the property.

With respect to too much testing, it's debatable. Some would say that everything should be tested for robustness; others say that, for efficient testing, only things that might break (i.e. logic) should be tested.

I'd lean more toward the second camp, just from personal experience, but if somebody did decide to test everything, I wouldn't say it was too much... a little overkill maybe for me, but not too much for them.

So, no - I would say there isn't such a thing as "too much" testing in the general sense, only for individuals.

Fry
+1  A: 

So the more I drive my programming by writing tests, the less I worry about the level of granularity of the testing. Looking back, it seems I am doing the simplest thing possible to achieve my goal of validating behaviour. This means I am generating a layer of confidence that my code is doing what I ask it to do; however, this is not an absolute guarantee that my code is bug-free. I feel that the correct balance is to test standard behaviour and maybe an edge case or two, then move on to the next part of my design.

I accept that this will not cover all bugs and use other traditional testing methods to capture these.

John Nolan
A: 

The more I read about it, the more I think some unit tests are just like some patterns: a smell of insufficient languages.

When you need to test whether your trivial getter actually returns the right value, it is because you may mix up the getter name and the member variable name. Enter Ruby's 'attr_reader :name', and this can't happen any more. It's just not possible in Java.

If your getter ever gets nontrivial, you can still add a test for it then.

I agree that testing a getter is trivial. However, I may be stupid enough to forget to set it within a constructor; therefore a test is needed. My thoughts have changed since I asked the question. See my answer: http://stackoverflow.com/questions/153234/how-deep-are-your-unit-tests/396138#396138
John Nolan
Actually, I'd argue that in some way, unit tests as a whole are a smell of a language problem. Languages that support contracts (pre/post conditions on methods), like Eiffel, still need some unit tests, but they need fewer of them. In practice, even simple contracts make it really easy to locate bugs: when a method's contract breaks, the bug is usually in that method.
Damien Pollet
@Damien: Perhaps unit tests and contracts are really the same thing in disguise? What I mean is, a language that "supports" contracts basically just makes it easy to write snippets of code -- tests -- that are (optionally) executed before and after other snippets of code, correct? If its grammar is simple enough, a language which doesn't natively support contracts can be easily extended to support them by writing a preprocessor, correct? Or are there some things that one approach (contracts or unit tests) can do that the other just can't?
j_random_hacker
+4  A: 

To those who propose testing "everything": realise that "fully testing" a method like int square(int x) requires about 4 billion test cases in common languages and typical environments.

In fact, it's even worse than that: a method void setX(int newX) is also obliged not to alter the values of any other members besides x -- are you testing that obj.y, obj.z, etc. all remain unchanged after calling obj.setX(42);?

It's only practical to test a subset of "everything." Once you accept this, it becomes more palatable to consider not testing incredibly basic behaviour. Every programmer has a probability distribution of bug locations; the smart approach is to focus your energy on testing regions where you estimate the bug probability to be high.
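
To make the setX point concrete, a test for that invariant has to assert every other member explicitly; doing this for each setter on each class quickly becomes absurd for all but genuinely risky code (the Point3D class is hypothetical):

using NUnit.Framework;

[TestFixture]
public class SetXInvariantTest
{
    // Verifies that setting X leaves every other member untouched -
    // the kind of exhaustive check the answer argues is impractical
    // to apply everywhere.
    [Test]
    public void SettingX_DoesNotAlterOtherMembers()
    {
        var p = new Point3D { X = 1, Y = 2, Z = 3 };
        p.X = 42;
        Assert.AreEqual(42, p.X);
        Assert.AreEqual(2, p.Y);
        Assert.AreEqual(3, p.Z);
    }
}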

j_random_hacker
A: 

Test the source code that you're worried about.

It is not useful to test portions of code that you are very, very confident in, as long as you don't make mistakes there.

Test bugfixes, so that it is the first and last time you fix a bug.

Test to get confidence in obscure code portions, so that you create knowledge.

Test before heavy and medium refactoring, so that you don't break existing features.

egapotz
A: 

This answer is more about figuring out how many unit tests to use for a given method that you know you want to unit test due to its criticality/importance. Using McCabe's Basis Path Testing technique, you can do the following to get quantitatively better code-coverage confidence than simple "statement coverage" or "branch coverage":

  1. Determine the Cyclomatic Complexity value of the method that you want to unit test (Visual Studio 2010 Ultimate, for example, can calculate this for you with its static analysis tools; otherwise, you can calculate it by hand via the flowgraph method - http://users.csc.calpoly.edu/~jdalbey/206/Lectures/BasisPathTutorial/index.html)
  2. List the basis set of independent paths that flow through your method - see the link above for a flowgraph example
  3. Prepare unit tests for each independent basis path determined in step 2 (see the sketch below)
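
For example, for a hypothetical method with cyclomatic complexity 3 (two decision points plus one), the basis set has three independent paths, so three unit tests cover it (all names invented for the sketch):

using NUnit.Framework;

public static class MathUtil
{
    // Cyclomatic complexity 3: the loop condition and the if are the two decisions.
    public static int SumOfPositives(int[] values)
    {
        int sum = 0;
        foreach (int v in values)
            if (v > 0) sum += v;
        return sum;
    }
}

[TestFixture]
public class SumOfPositivesBasisPathTests
{
    [Test] // basis path 1: loop body never entered
    public void EmptyArray_ReturnsZero()
    {
        Assert.AreEqual(0, MathUtil.SumOfPositives(new int[0]));
    }

    [Test] // basis path 2: loop entered, positive branch not taken
    public void AllNegatives_ReturnZero()
    {
        Assert.AreEqual(0, MathUtil.SumOfPositives(new[] { -1, -2 }));
    }

    [Test] // basis path 3: loop entered, positive branch taken
    public void MixedValues_SumsOnlyPositives()
    {
        Assert.AreEqual(5, MathUtil.SumOfPositives(new[] { 2, -7, 3 }));
    }
}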
JD