I want to begin unit testing our application, because I believe this is the first step to developing a good relationship with testing and will allow me to branch into other forms of testing, most interestingly BDD with Cucumber.

We currently generate all of our Base classes using Codesmith, based entirely on the tables in a database. I am curious about the benefits of generating test cases along with these Base classes. Is this poor testing practice?

This leads me to the ultimate question of my post. What do we test when using Unit Tests?

Do we test for the outputs we know we want, or do we test for the outputs we do not want?

There can be methods with multiple ways of failing and multiple ways of succeeding; how do we know when to stop?

Take a summing function, for example. If we give it 1 and 2 and expect 3 in the only unit test, how do we know that 5 and 6 aren't coming back as 35?

Question Recap

  • Generating unit tests (Good/Bad)
  • What/How much do we test?
+7  A: 

Start with your requirements and write tests that test the expected behavior. From that point on, how many other scenarios you test can be driven by your schedule, or maybe by your recognizing non-success scenarios that are particularly high-risk.

You might consider writing non-success tests only in response to defects you (or your users) discover (the idea being that you write a test that tests the defect fix before you actually fix the defect, so that your test will fail if that defect is re-introduced into your code in future development).
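As a sketch of that idea (the PriceCalculator class and the specific defect are invented for illustration; NUnit assumed): when a bug report comes in, you first write a test that pins down the buggy behavior, watch it fail, then fix the code so the test passes and stays in your suite as a regression guard.

```csharp
using NUnit.Framework;

// Hypothetical defect report: discounts over 100% produced negative prices.
public class PriceCalculator {
    public decimal Apply(decimal price, decimal discountPercent) {
        // The fix: clamp the discount so the price can never go negative.
        if (discountPercent > 100m) discountPercent = 100m;
        return price * (100m - discountPercent) / 100m;
    }
}

[TestFixture]
public class PriceCalculatorRegressionTests {
    // Written before the fix; fails on the buggy code, passes after.
    [Test]
    public void Discount_over_100_percent_never_yields_negative_price() {
        var calc = new PriceCalculator();
        Assert.AreEqual(0m, calc.Apply(50m, 150m));
    }
}
```

If that defect is ever re-introduced, this test fails immediately and names the exact behavior that broke.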

lance
`Requirements` has a good chance of being misunderstood here. This is _unit testing_, whereby you test individual units of code (individual procedures and the like) and where [application-level / customer-driven] requirements are far removed. Sure, as programmers we can write "requirements" for the I/O / "behavior" of elementary functions, but that's not how the word "requirements" is typically understood.
mjv
Good clarification.
lance
This answer is also an extremely useful description! Upvoted.
James Armstead
+6  A: 

The point of unit tests is to give you confidence (but only in special cases does it give you certainty) that the actual behavior of your public methods matches the expected behavior. Thus, if you have a class Adder

class Adder {
    public int Add(int x, int y) {
        return x + y;
    }
}

and a corresponding unit test

[Test]
public void Add_returns_that_one_plus_two_is_three() {
    Adder a = new Adder();
    int result = a.Add(1, 2);
    Assert.AreEqual(3, result);
}

then this gives you some (but not 100%) confidence that the method under test is behaving appropriately. It also gives you some defense against breaking the code upon refactoring.

What do we test when using Unit Tests?

The actual behavior of your public methods against the expected (or specified) behavior.

Do we test the examples we know we want out?

Yes, one way to gain confidence in the correctness of your method is to take some input with known expected output, execute the public method on the input, and compare the actual output to the expected output.

Jason
+1  A: 

1) To start, I'd recommend you test your app's core logic.

2) Then, use the code coverage tool in Visual Studio to see whether all of your code is exercised by tests (all branches of if-else and case conditions are invoked). This is some sort of answer to your question about testing 1 + 2 = 3, 5 + 6 = 35: when the code is covered, you can feel safe with further experiments.

3) It's good practice to cover 80-90% of the code: covering the rest is usually inefficient effort: getters and setters, 1-line exception handling, etc.

4) Learn about separation of concerns.

5) Generating unit tests: try it, and you'll see that you can save a good many lines of code compared to writing them manually. I prefer generating the file with VS, then writing the remaining TestMethods myself.
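To illustrate point 2, here is a minimal sketch of what "covering all branches" means (the NumberClassifier class is hypothetical; NUnit assumed): one test per arm of the conditional, so a coverage tool reports every branch as hit.

```csharp
using NUnit.Framework;

public class NumberClassifier {
    public string Classify(int n) {
        if (n < 0) return "negative";   // branch 1
        else if (n == 0) return "zero"; // branch 2
        else return "positive";         // branch 3
    }
}

[TestFixture]
public class NumberClassifierTests {
    // One test per branch; with all three, branch coverage is 100%.
    [Test] public void Negative_input() { Assert.AreEqual("negative", new NumberClassifier().Classify(-5)); }
    [Test] public void Zero_input()     { Assert.AreEqual("zero", new NumberClassifier().Classify(0)); }
    [Test] public void Positive_input() { Assert.AreEqual("positive", new NumberClassifier().Classify(7)); }
}
```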

portland
I guess more or less my generation of unit tests would be generated with the assertions already there. They would be testing getters/setters. I have read that testing getters/setters is mostly considered a good thing, typically in the context of "If one is broken, I would DEFINITELY want to know." Thoughts?
James Armstead
Imagine you have a function with 10 parameters, and you need to test it for 9 different exceptions, each thrown by changing 1 parameter. If you use generated tests, you will get 9*(10+..) ~= 150 lines of code, most of which will be the same. Isn't it a good idea to create a helper method (with 1 parameter) which invokes yours (with 10 parameters), then just invoke that one 9 times, improving the readability of your test? On the other hand, it's tricky enough to test private methods by writing your own accessor. Of course, that should be done automatically. I hope my ideas will help you.
portland
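A sketch of the helper pattern portland describes (the Account class and its validations are invented, and trimmed to 3 parameters to keep it short; NUnit's Assert.Throws assumed): each test varies only the parameter under scrutiny, and the helper hides the boilerplate of constructing the full call.

```csharp
using System;
using NUnit.Framework;

public class Account {
    // Hypothetical method; imagine it with many more parameters, each validated.
    public void Transfer(string from, string to, decimal amount) {
        if (from == null) throw new ArgumentNullException(nameof(from));
        if (to == null) throw new ArgumentNullException(nameof(to));
        if (amount <= 0) throw new ArgumentOutOfRangeException(nameof(amount));
    }
}

[TestFixture]
public class TransferValidationTests {
    // Helper: one line per scenario instead of repeating the whole call site.
    private static void AssertThrows<T>(string from, string to, decimal amount)
        where T : Exception {
        Assert.Throws<T>(() => new Account().Transfer(from, to, amount));
    }

    [Test] public void Null_from()          { AssertThrows<ArgumentNullException>(null, "b", 10m); }
    [Test] public void Null_to()            { AssertThrows<ArgumentNullException>("a", null, 10m); }
    [Test] public void NonPositive_amount() { AssertThrows<ArgumentOutOfRangeException>("a", "b", 0m); }
}
```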
I've found that if my getters and setters have enough side effects to require significant independent testing, I'm already setting myself up for pain.
Dan Bryant
Of course, I was talking about simple getters and setters without side effects.
portland
+3  A: 

What to test: Everything that has ever gone wrong.

When you find a bug, write a test for the buggy behavior before you fix the code. Then, when the code is working correctly, the test will pass, and you'll have another test in your arsenal.

Andy Lester
+2  A: 

You unittest things where you

  • want to make sure your algorithm works
  • want to safeguard against accidental changes in the future

So in your example it would not make much sense to test the generated classes. Test the generator instead.

It's good practice to test the main use cases (what the tested function was designed for) first. Then you test the main error cases. Then you write tests for corner cases (i.e. lower and upper bounds). The unusual error cases are normally so hard to produce that it doesn't make sense to unit-test them.

If you need to verify a large range of parameter sets, use data-driven testing.
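For example, NUnit's [TestCase] attribute turns the Adder test from the earlier answer into a data-driven one: each row runs as its own test, so a handful of lines covers several input pairs, including the 5 + 6 case from the question.

```csharp
using NUnit.Framework;

public class Adder {
    public int Add(int x, int y) { return x + y; }
}

[TestFixture]
public class AdderDataDrivenTests {
    // Each [TestCase] row is executed as a separate test.
    [TestCase(1, 2, 3)]
    [TestCase(5, 6, 11)]
    [TestCase(-4, 4, 0)]
    [TestCase(int.MaxValue, 0, int.MaxValue)]
    public void Add_returns_expected_sum(int x, int y, int expected) {
        Assert.AreEqual(expected, new Adder().Add(x, y));
    }
}
```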

How many things you test is a matter of effort vs. return, so it really depends on the individual project. Normally you try to follow the 80/20 rule, but there may be applications where you need more test coverage because a failure would have very serious consequences.

You can dramatically reduce the time you need to write tests if you use a test-driven approach (TDD). That's because code that isn't written with testability in mind is much harder, sometimes nearly impossible, to test. But since nothing in life is free, code developed with TDD tends to be more complex itself.

TToni
+1  A: 

I'm also beginning the process of more consistently using unit tests and what I've found is that the biggest task in unit testing is structuring my code to support testing. As I start to think about how to write tests, it becomes clear where classes have become overly coupled, to the point that the complexity of the 'unit' makes defining tests difficult. I spend as much or more time refactoring my code as I do writing tests. Once the boundaries between testable units become clearer, the question of where to start testing resolves itself; start with your smallest isolated dependencies (or at least the ones you're worried about) and work your way up.

Dan Bryant
+1  A: 

There are three basic cases I test for: min, max, and somewhere between min and max.

And, where appropriate, two extremes: below min and above max.

There are obvious exceptions (some code may not have a min or max for example) but I've found that unit testing for these events is a good start and captures a majority of "common" issues with the code.
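A minimal sketch of those five cases, assuming NUnit and an invented Percentage validator whose min is 0 and max is 100:

```csharp
using NUnit.Framework;

// Hypothetical unit with a defined min (0) and max (100).
public class Percentage {
    public static bool IsValid(int value) { return value >= 0 && value <= 100; }
}

[TestFixture]
public class PercentageBoundaryTests {
    // Min, max, and in between should all be accepted...
    [Test] public void At_min()    { Assert.IsTrue(Percentage.IsValid(0)); }
    [Test] public void At_max()    { Assert.IsTrue(Percentage.IsValid(100)); }
    [Test] public void Between()   { Assert.IsTrue(Percentage.IsValid(50)); }
    // ...while the two extremes just outside the range should be rejected.
    [Test] public void Below_min() { Assert.IsFalse(Percentage.IsValid(-1)); }
    [Test] public void Above_max() { Assert.IsFalse(Percentage.IsValid(101)); }
}
```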

TheGreatAvatar