I'm fairly new to the unit testing world, and I just decided to add test coverage for my existing app this week.

This is a huge task, mostly because of the number of classes to test but also because writing tests is all new to me.

I've already written tests for a bunch of classes, but now I'm wondering if I'm doing it right.

When I'm writing tests for a method, I feel like I'm writing a second time what I already wrote in the method itself.
My tests just seem so tightly bound to the method (testing every code path, expecting certain inner methods to be called a certain number of times, with certain arguments) that it seems that if I ever refactor the method, the tests will fail even if the method's final behavior does not change.

This is just a feeling, and as I said earlier, I have no experience with testing. If some more experienced testers out there could give me advice on how to write good tests for an existing app, it would be greatly appreciated.

Edit: I would like to thank Stack Overflow; I got great input in less than 15 minutes that answered more than the hours of online reading I just did.

+3  A: 

My tests just seem so tightly bound to the method (testing every code path, expecting certain inner methods to be called a certain number of times, with certain arguments) that it seems that if I ever refactor the method, the tests will fail even if the method's final behavior does not change.

I think you are doing it wrong.

A unit test should:

  • test one method
  • provide some specific arguments to that method
  • test that the result is as expected

It should not look inside the method to see what it is doing, so changing the internals should not cause the test to fail. You should not directly test that private methods are being called. If you are interested in finding out whether your private code is being tested then use a code coverage tool. But don't get obsessed by this: 100% coverage is not a requirement.

If your method calls public methods in other classes, and these calls are guaranteed by your interface, then you can test that these calls are being made by using a mocking framework.
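
For example, such a test might look like the sketch below, using NUnit and the Moq mocking framework (my choice of framework; the INotifier interface and OrderService class are hypothetical, invented for illustration):

using Moq;
using NUnit.Framework;

// Hypothetical dependency that the service is guaranteed to call.
public interface INotifier
{
    void Notify(string message);
}

public class OrderService
{
    private readonly INotifier notifier;

    public OrderService(INotifier notifier)
    {
        this.notifier = notifier;
    }

    public void PlaceOrder(string product)
    {
        // ... business logic would go here ...
        notifier.Notify("Order placed: " + product);
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void PlaceOrder_NotifiesAboutTheOrder()
    {
        var notifier = new Mock<INotifier>();
        var service = new OrderService(notifier.Object);

        service.PlaceOrder("book");

        // Verify only the call that the interface guarantees,
        // not every internal step the method happens to take.
        notifier.Verify(n => n.Notify(It.IsAny<string>()), Times.Once());
    }
}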

You should not use the method itself (or any of the internal code it uses) to generate the expected result dynamically. The expected result should be hard-coded into your test case so that it does not change when the implementation changes. Here's a simplified example of what a unit test should do:

[Test]
public void TestAdd()
{
    int x = 5;
    int y = -2;
    int expectedResult = 3;

    Calculator calculator = new Calculator();
    int actualResult = calculator.Add(x, y);

    Assert.AreEqual(expectedResult, actualResult);
}

Note that how the result is calculated is not checked - only that the result is correct. Keep adding more simple test cases like the one above until you have covered as many scenarios as possible. Use your code coverage tool to see if you have missed any interesting paths.

Mark Byers
Thanks a lot, your answer was the most complete. I now better understand what mock objects are really for: I don't need to assert every call to other methods, just the relevant ones. I also don't need to know HOW things get done, only that they get done correctly.
Pixelastic
+4  A: 

It's worth noting that retrofitting unit tests into existing code is far more difficult than driving the creation of that code with tests in the first place. That's one of the big questions in dealing with legacy applications: how to unit test? This has been asked many times before (so this may be closed as a duplicate question), and people usually end up here:

http://stackoverflow.com/questions/167079/moving-existing-code-to-test-driven-development

I second the accepted answer's book recommendation, but beyond that there's more information linked in the answers there.

David
Whether you write tests first or second, either is fine, but when writing tests you ensure your code is testable so that you CAN write tests. You wind up thinking "how can I test this?", and that in itself often causes better code to be written. Retrofitting test cases is always a big no-no. Very hard. It's not a time problem; it's a quantity and testability issue. I can't go to my boss right now and say I want to write test cases for our thousand-plus tables and uses; it's too much now, it would take me a year, and some of the logic/decisions have been forgotten. So don't put it off too long :P
Dmitriy Likhten
A: 

Try writing a Unit Test before writing the method it is going to test.

That will definitely force you to think a little differently about how things are being done. You'll have no idea how the method is going to work, just what it is supposed to do.

You should always be testing the results of the method, not how the method gets those results.

Justin Niessner
Yes, I'd love to be able to do that, except that the methods are already written. I just want to test them. I'll write tests before methods in the future, though.
Pixelastic
+2  A: 

Don't write tests just to get full coverage of your code. Write tests that guarantee your requirements. You may discover code paths that are unnecessary. Conversely, if they are necessary, they are there to fulfill some kind of requirement; find out what it is and test the requirement (not the path).

Keep your tests small: one test per requirement.

Later, when you need to make a change (or write new code), try writing one test first. Just one. Then you'll have taken the first step in test-driven development.
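
As a small, hypothetical sketch of "one test per requirement" (the DiscountCalculator class and its two rules are invented for illustration, not from this answer), each requirement gets its own small test:

using NUnit.Framework;

public class DiscountCalculator
{
    // Requirement 1: orders over 100 get a 10% discount.
    // Requirement 2: smaller orders get no discount.
    public decimal DiscountFor(decimal orderTotal)
    {
        return orderTotal > 100m ? orderTotal * 0.10m : 0m;
    }
}

[TestFixture]
public class DiscountCalculatorTests
{
    [Test]
    public void OrdersOverOneHundredGetTenPercentDiscount()
    {
        var calculator = new DiscountCalculator();
        Assert.AreEqual(15m, calculator.DiscountFor(150m));
    }

    [Test]
    public void SmallOrdersGetNoDiscount()
    {
        var calculator = new DiscountCalculator();
        Assert.AreEqual(0m, calculator.DiscountFor(50m));
    }
}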

Jon Reid
Thanks, it makes sense to only have small tests for small requirements, one at a time. Lesson learned.
Pixelastic
+1  A: 

Unit testing is about the output you get from a function/method/application. It does not matter at all how the result is produced; it just matters that it is correct. Therefore, your approach of counting calls to inner methods and such is completely wrong ;) What I tend to do is sit down and write down what a method should return given certain input values or a certain environment, then write a test which compares the actual value returned with what I came up with.

x3ro
Thanks! I had a feeling I was doing it wrong, but having someone actually tell me so is better.
Pixelastic
A: 

Tests are supposed to improve maintainability. If you change a method and a test breaks, that can be a good thing. On the other hand, if you look at your method as a black box, then it shouldn't matter what is inside the method. The fact is you need to mock things for some tests, and in those cases you really can't treat the method as a black box. The only thing you can do is write an integration test -- you load up a fully instantiated instance of the service under test and have it do its thing like it would while running in your app. Then you can treat it as a black box.
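
An integration-style test in that spirit might look like the sketch below (InvoiceService and TaxRule are hypothetical names, not from this answer; the point is that the test builds the object graph the same way the running app would, with no mocks):

using NUnit.Framework;

public class TaxRule
{
    public decimal Apply(decimal amount)
    {
        return amount * 1.20m; // flat 20% tax, just for the example
    }
}

public class InvoiceService
{
    private readonly TaxRule taxRule;

    public InvoiceService(TaxRule taxRule)
    {
        this.taxRule = taxRule;
    }

    public decimal TotalFor(decimal netAmount)
    {
        return taxRule.Apply(netAmount);
    }
}

[TestFixture]
public class InvoiceServiceIntegrationTests
{
    [Test]
    public void TotalIncludesTax()
    {
        // Fully instantiated service with its real collaborator -- no mocks.
        var service = new InvoiceService(new TaxRule());

        // Only the observable result is checked; the service is a black box.
        Assert.AreEqual(120m, service.TotalFor(100m));
    }
}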

When I'm writing tests for a method, I feel like I'm writing a second time what I already wrote in the method itself. My tests just seem so tightly bound to the method (testing every code path, expecting certain inner methods to be called a certain number of times, with certain arguments) that it seems that if I ever refactor the method, the tests will fail even if the method's final behavior does not change.

This is because you are writing your tests after you have written your code. If you did it the other way around (wrote the tests first), it wouldn't feel this way.

hvgotcodes
Thanks for the black box example, I hadn't thought of it that way. I wish I had discovered unit testing earlier, but unfortunately that is not the case, and I'm stuck with a _legacy_ app to add tests to. Isn't there any way to add tests to an existing project without them feeling broken?
Pixelastic
Writing tests after is different from writing tests before, so you are stuck with that. However, what you can do is set up the tests so that they fail first, then make them pass by putting your class under test in after the test initially fails. Same thing with mocks -- initially the mock has no expectations and will fail because the method under test will do something with the mock; then make the test pass. I wouldn't be surprised if you find a lot of bugs this way.
hvgotcodes
Also, be really specific with your expectations. Don't just assert that the test returns an object; test that the object has the expected values on it. Test that when a value is supposed to be null, it is. You can also break the work up a bit by doing some refactoring that you meant to do, after you add some tests.
hvgotcodes
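
A small sketch of the "be specific with expectations" advice above (the CustomerParser example is hypothetical, invented for illustration):

using NUnit.Framework;

public class Customer
{
    public string Name;
    public string MiddleName;
}

public static class CustomerParser
{
    // Expects "first;last" or "first;middle;last"; a missing middle name stays null.
    public static Customer Parse(string line)
    {
        string[] parts = line.Split(';');
        return new Customer
        {
            Name = parts[0],
            MiddleName = parts.Length == 3 ? parts[1] : null
        };
    }
}

[TestFixture]
public class CustomerParserTests
{
    [Test]
    public void ParsesNameAndLeavesMissingMiddleNameNull()
    {
        Customer customer = CustomerParser.Parse("Ada;Lovelace");

        // Don't stop at "an object came back" -- check each value,
        // including the one that is supposed to be null.
        Assert.IsNotNull(customer);
        Assert.AreEqual("Ada", customer.Name);
        Assert.IsNull(customer.MiddleName);
    }
}
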
+1  A: 

This is the best book for unit testing: http://www.manning.com/osherove/

It explains all the best practices, dos, and don'ts for unit testing.

Linx
Thank you, I'll have a look at that. I bought Kent Beck's book about TDD, but I think I need to add test coverage to my existing app first.
Pixelastic
+1  A: 

For unit testing, I have found both test-driven (tests first, code second) and code-first, tests-second approaches to be extremely useful.

Instead of writing the code and then writing tests straight from it, write the code, then look at what you THINK the code should be doing. Think about all of its intended uses and then write a test for each. I find writing tests to be faster, but more involved, than the coding itself. The tests should test the intention. Also, by thinking about the intentions you wind up finding corner cases in the test-writing phase. And of course, while writing tests you might find that one of the few uses causes a bug (something I often find, and I am very glad those bugs did not corrupt data and go unchecked).

Yet testing is almost like coding twice. In fact, I have had applications where there was more test code (by quantity) than application code. One example was a very complex state machine. I had to make sure that after adding more logic to it, the entire thing always worked on all previous use cases. And since those cases were quite hard to follow by looking at the code, I wound up having such a good test suite for this machine that I was confident it would not break even after making changes, and the tests saved my ass a few times. And as users or testers found bugs with the flow or corner cases unaccounted for, guess what: they got added to the tests and never happened again. This really gave users confidence in my work, in addition to making the whole thing super stable. And when it had to be rewritten for performance reasons, guess what: it worked as expected on all inputs, thanks to the tests.

All the simple examples like function square(number) are great and all, but are probably bad candidates for spending lots of time testing. The ones that do important business logic, that's where the testing is important. Test the requirements. Don't just test the plumbing. If the requirements change, then guess what, the tests must change too.

Testing should not literally be testing that function foo invoked function bar 3 times. That is wrong. Check that the result and side effects are correct, not the inner mechanics.
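
To make that concrete, here is a small sketch (the ShoppingCart class is hypothetical, not from this answer): the test asserts the visible end state rather than counting internal calls.

using System.Collections.Generic;
using NUnit.Framework;

public class ShoppingCart
{
    private readonly List<string> items = new List<string>();

    public void AddItem(string name, int quantity)
    {
        for (int i = 0; i < quantity; i++)
        {
            items.Add(name);
        }
    }

    public int Count
    {
        get { return items.Count; }
    }
}

[TestFixture]
public class ShoppingCartTests
{
    [Test]
    public void AddingThreeApplesLeavesThreeItemsInTheCart()
    {
        var cart = new ShoppingCart();

        cart.AddItem("apple", 3);

        // The side effect (cart contents), not the inner mechanics, is what matters.
        Assert.AreEqual(3, cart.Count);
    }
}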

Dmitriy Likhten
Nice answer, gave me confidence that writing tests after code can still be useful and possible.
Pixelastic
Just to add to it: in Rails, I wind up writing a lot of parrot testing... As in `User` `has_many :homies`, `User` `should_have_many :homies`. It feels like, why the hell am I writing this? Sure, it's easy, but it's just code duplication. In reality, this is a test indicating that some behaviors are EXPECTED in your program. It is an assumption in the rest of the code that the following is true. If it ever becomes false, it is an indication that either you did something wrong OR the program needs to be changed. Sometimes testing is not about immediate gains, but maintenance reduction.
Dmitriy Likhten
A perfect recent example: I had a very simple function. Pass it true, it does one thing; false, it does another. VERY SIMPLE. I had something like 4 tests checking to make sure the function does what it intends to do. I changed the behavior a bit, ran the tests, and POW, a problem. The funny thing is that when using the application the problem does not manifest; it's only in a complex case that it does. The test case found it, and I saved myself hours of headache.
Dmitriy Likhten