views: 156
answers: 5
Mock frameworks, e.g. EasyMock, make it easier to plug in dummy dependencies. That said, using them to verify how different methods on particular components are called (and in what order) seems bad to me. It exposes behaviour to the test class, which makes the production code harder to maintain. And I really don't see the benefit; mentally, I feel as if I've been chained to a heavy ball.

I would much rather just test against an interface, giving test data as input and asserting on the result. Better yet, use a testing tool that generates test data automatically to verify a given property, e.g. that adding one element to a list and removing it immediately yields the same list.
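The add-then-remove property can be checked without any mocking at all. A minimal hand-rolled sketch (real property-based tools such as QuickCheck-style libraries do this generation for you; the class name and the 100-trial count here are arbitrary choices for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class AddRemoveProperty {
    public static void main(String[] args) {
        Random random = new Random(42);
        // Property: adding an element to a list and immediately removing it
        // yields the original list. Checked against 100 random lists.
        for (int trial = 0; trial < 100; trial++) {
            List<Integer> original = new ArrayList<>();
            int size = random.nextInt(10);
            for (int i = 0; i < size; i++) {
                original.add(random.nextInt(1000));
            }
            List<Integer> copy = new ArrayList<>(original);
            copy.add(random.nextInt(1000));
            copy.remove(copy.size() - 1);  // remove by index: the element just added
            if (!copy.equals(original)) {
                throw new AssertionError("Property failed for " + original);
            }
        }
        System.out.println("Property held for 100 random lists");
    }
}
```

Note there is no expectation about which methods get called internally — only input data and an asserted result.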

In our workplace we use Hudson, which reports test coverage. Unfortunately, it makes it easy to become blindly obsessed with testing everything. I strongly feel that one shouldn't test everything if one also wants to be productive in maintenance mode. A good example is controllers in web frameworks: since they should generally contain very little logic, using a mock framework to test that a controller calls such-and-such methods in a particular order is nonsensical, in my honest opinion.

Dear SOers, what are your opinions on this?

+1  A: 

I'd asked a similar question How Much Unit Testing is a Good Thing, which might help give an idea of the variety of levels of testing people feel are appropriate.

Dean J
+1  A: 
  1. What is the probability that, during your code's maintenance, some junior employee will break the part of the code that ensures "the controller calls such and such methods in a particular order"?

  2. What is the cost to your organization if such a thing occurs - in production outage, debugging/fixing/re-testing/re-release, legal/financial risk, reputation risk, etc...?

Now, multiply #1 and #2 and check whether your reluctance to achieve a reasonable amount of test coverage is worth the risk.

Sometimes, it will not be (this is why in testing there's a concept of a point of diminishing returns).

E.g. if you maintain a web app that is not production-critical and has 100 users who have a workaround if the app breaks (and/or can do an easy and immediate rollback), then spending 3 months achieving full test coverage of that app is probably nonsensical.

If you work on an app where a minor bug can have multi-million-dollar or worse consequences (think space shuttle software, or the guidance system for a cruise missile), then thorough testing with complete coverage becomes a lot more sensible.

Also, I'm not sure if I'm reading too much into your question, but you seem to imply that mocking-enabled unit testing somehow excludes application/integration functional testing. If that is the case, you are right to object to such a notion - the two testing approaches must co-exist.

DVK
+1 for risk analysis.
TrueWill
+2  A: 

I agree - I'm in favor of leaning heavily towards state verification rather than behavior verification (a loose interpretation of classical TDD while still using test doubles).
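State verification in this sense means exercising the object and asserting on the observable outcome, rather than on which collaborator methods were invoked. A minimal sketch (the `Cart` class is hypothetical, invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class StateVerificationExample {
    // Hypothetical production class: a cart that totals line-item prices.
    static class Cart {
        private final List<Double> prices = new ArrayList<>();
        void add(double price) { prices.add(price); }
        double total() {
            double sum = 0;
            for (double p : prices) sum += p;
            return sum;
        }
    }

    public static void main(String[] args) {
        // State verification: act on the object, then assert on the result.
        // No expectations about which internal methods ran, or in what order.
        Cart cart = new Cart();
        cart.add(10.0);
        cart.add(2.5);
        if (cart.total() != 12.5) {
            throw new AssertionError("expected 12.5, got " + cart.total());
        }
        System.out.println("total = " + cart.total());
    }
}
```

The test survives any internal refactoring of `Cart` that preserves the total, which is exactly the maintenance benefit being argued for.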

The book The Art of Unit Testing has plenty of good advice in these areas.

100% test coverage, GUI testing, testing getters/setters or other no-logic code, etc., seem unlikely to provide a good ROI. TDD will yield high test coverage in any case. Test what might break.

TrueWill
Unless your getters/setters suddenly get rewritten to change the logic and are broken in the process (after all, the whole point of having a getter/setter method as opposed to a public member is that its implementation can change)... then you start wishing you had tested your getters/setters.
DVK
BTW, my comment above is based on a REAL situation where that happened, and I was incredibly glad I *did* have tests for those.
DVK
+1 for the book link
DVK
@DVK - Yes, if a getter/setter has logic it should be tested.
TrueWill
+1  A: 

It depends on how you model the domain(s) of your program.

If you model the domains in terms of data stored in data structures, and methods that read data from one data structure and store derived data in another (procedures or functions, depending on how procedural or functional your design is), then mock objects are not appropriate. So-called "state-based" testing is what you want. The outcome you care about is that a procedure puts the right data in the right variables; what it calls to make that happen is just an implementation detail.

If you model the domains in terms of message-passing communication protocols by which objects collaborate, then the protocols are what you care about, and the data the objects store to coordinate their behaviour in those protocols is just an implementation detail. In that case, mock objects are the right tool for the job, and state-based testing ties the tests too closely to unimportant implementation details.
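In the protocol-oriented style, the test verifies that the right message was sent, not what state resulted. A minimal hand-rolled mock can make this concrete (the `Order`/`Auditor` names are hypothetical, invented for illustration; a framework like EasyMock automates the recording and verification):

```java
import java.util.ArrayList;
import java.util.List;

public class InteractionTestExample {
    // Hypothetical collaborator protocol: an order must tell the auditor
    // about every cancellation. What the auditor stores is its own business.
    interface Auditor {
        void record(String event);
    }

    static class Order {
        private final Auditor auditor;
        Order(Auditor auditor) { this.auditor = auditor; }
        void cancel() { auditor.record("cancelled"); }
    }

    // Hand-rolled mock: records the messages it receives, so the test can
    // verify the protocol rather than any resulting state.
    static class RecordingAuditor implements Auditor {
        final List<String> events = new ArrayList<>();
        public void record(String event) { events.add(event); }
    }

    public static void main(String[] args) {
        RecordingAuditor auditor = new RecordingAuditor();
        new Order(auditor).cancel();
        if (!auditor.events.equals(List.of("cancelled"))) {
            throw new AssertionError("expected [cancelled], got " + auditor.events);
        }
        System.out.println("protocol verified: " + auditor.events);
    }
}
```

Here the assertion on the message sent is the point of the test; asserting on `Order`'s internal state instead would couple the test to details the protocol doesn't care about.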

And most object-oriented programs mix the two styles. Some code will be written in a purely functional style, transforming immutable data structures. Other code will coordinate the behaviour of objects that change their hidden, internal state over time.

As for high test coverage, it really doesn't tell you much. Low coverage shows you where you have inadequate testing, but high coverage doesn't show you that the code is adequately tested. Tests can, for example, run through code paths (and so increase the coverage stats) without actually asserting anything about what those paths did. Also, what really matters is how the different parts of the program behave in combination, which unit-test coverage won't tell you. If you want to verify that your tests really do test your system's behaviour adequately, you could use a mutation-testing tool. It's a slow process, so it's something you'd run in a nightly build rather than on every check-in.
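The coverage-without-assertions gap is easy to demonstrate. In this sketch (a deliberately contrived example, not from the thread), the "test" achieves full line coverage of a buggy method yet catches nothing; a mutation-testing tool would flag it, because mutating the comparison changes nothing the test observes:

```java
public class CoverageWithoutAssertions {
    // Deliberately buggy: claims to return the maximum, returns the minimum.
    static int max(int a, int b) {
        return a < b ? a : b;  // bug: condition inverted
    }

    public static void main(String[] args) {
        // This "test" executes every line of max() -- 100% coverage --
        // but makes no assertion, so the bug goes unnoticed.
        max(1, 2);
        System.out.println("covered but unverified");

        // An actual assertion would catch it:
        //   if (max(1, 2) != 2) throw new AssertionError("max is broken");
    }
}
```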

Nat
+2  A: 

I read 2 questions:

What is your opinion on testing that particular methods on components are called in a particular order?

I've fallen foul of this in the past. We use a lot more "stubbing" and a lot less "mocking" these days. We try to write unit tests that test only one thing. When we do this, it's normally possible to write a very simple test that stubs out interactions with most other components. And we very rarely assert ordering. This helps make the tests less brittle.

Tests which test only one thing are easier to understand and maintain.

Also, if you find yourself having to write lots of expectations for interactions with lots of components, there may well be a problem in the code you're testing anyway. If the tests are difficult to maintain, the code under test can often be refactored.
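The stub-over-mock distinction: a stub only feeds canned answers into the object under test, and the test never verifies how (or whether) the stub was called. A minimal sketch (class and method names are hypothetical):

```java
public class StubExample {
    // Collaborator the class under test depends on.
    interface RateService {
        double rateFor(String currency);
    }

    static class PriceConverter {
        private final RateService rates;
        PriceConverter(RateService rates) { this.rates = rates; }
        double toEuros(double dollars) {
            return dollars * rates.rateFor("EUR");
        }
    }

    public static void main(String[] args) {
        // Stub: a canned answer. No expectations about call counts or order --
        // the test checks only one thing: the converted amount.
        RateService stub = currency -> 0.5;
        double euros = new PriceConverter(stub).toEuros(10.0);
        if (euros != 5.0) {
            throw new AssertionError("expected 5.0, got " + euros);
        }
        System.out.println("euros = " + euros);
    }
}
```

Because nothing here asserts ordering or call counts, refactoring `PriceConverter` internally won't break the test.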

Should one be obsessed with test coverage?

When writing unit tests for a given class, I'm pretty obsessed with test coverage. It makes it really easy to spot important bits of behaviour that I haven't tested. I can also make a judgement call about which bits I don't need to cover.

Overall unit-test coverage stats? Not particularly interested, so long as they're high.

100% unit test coverage for an entire system? Not interested at all.

Joe Field