views:

65

answers:

3

In Osherove's great book "The Art of Unit Testing", one of the test anti-patterns is over-specification, which is basically testing the internal state of the object instead of some expected output. In my experience, using isolation frameworks can cause the same unwanted side effects as testing internal behavior, because one tends to implement only the behavior necessary to make the stub interact with the object under test. Now if your implementation changes later on (but the contract remains the same), your test will suddenly break because you are expecting some data from the stub which was not implemented.

So what do you think is the best approach to counter this?

1) Implement your stubs/mocks fully. This has the negative side effect of potentially making your test less readable, and of specifying more than necessary to make your test pass.

2) Favor manual, fully implemented fakes.

3) Implement your stubs/fakes so that they make your test just pass, and then deal with the brittleness that this might introduce.
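To make the trade-off concrete, here is a minimal sketch of options 2 and 3 using Python's stdlib `unittest.mock` (the class and method names here are illustrative, not from the question):

```python
from unittest.mock import Mock

class ReportService:
    """Hypothetical class under test; depends on a repository collaborator."""
    def __init__(self, repo):
        self.repo = repo

    def total(self):
        # Only find_all() is used, so a minimal stub needs nothing else.
        return sum(item["amount"] for item in self.repo.find_all())

# Option 3: stub only what this test needs to pass.
repo = Mock()
repo.find_all.return_value = [{"amount": 2}, {"amount": 3}]
assert ReportService(repo).total() == 5

# Option 2: a manual fake implementing the whole (small) contract,
# including members this particular test never exercises.
class FakeRepo:
    def __init__(self, items):
        self._items = list(items)

    def find_all(self):
        return list(self._items)

    def save(self, item):
        # Implemented even though total() never calls it.
        self._items.append(item)

assert ReportService(FakeRepo([{"amount": 2}, {"amount": 3}])).total() == 5
```

The minimal stub keeps the test short but couples it to the fact that `total()` happens to call `find_all()`; the full fake survives internal refactorings of `ReportService` at the cost of more test-support code.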

+3  A: 

I do not think you should favor manually written fakes - unless you prefer writing test code to writing production code.

Instead you have another option: if you test the functionality and not the implementation, avoid testing private methods (which can be refactored), and in general write less fragile tests, you'll see that using a mocking/isolation framework neither requires you to over-specify the system nor causes your tests to become more fragile.

In a nutshell: fragile tests can be written with or without fakes/mocks, and vice versa.

Dror Helper
I always focus on black-box testing (i.e. testing the end result, not the implementation), but that doesn't change the fact that a complex service can interact with multiple instances to perform a job. This again means that you'll (potentially) have to specify a lot to test your end result. Of course you could reuse the real implementations of the classes your service interacts with, but then you're doing integration testing.
Marius
Having a complex test might mean you're trying to test too many things in a single test - the hint is that you have more than one assert per test. Another solution is to fake external dependencies the same way you test: fake as if you do not know the internal functionality, only the required output.
Dror Helper
I always use one assert per test. Sorry if my question is not well formed, but your suggestion is actually my problem. When you start faking your dependencies (creating stubs/mocks), how much of the dependency's contract should you fake? The whole contract, or just the part required to make your test pass?
Marius
I guess the problem was my answer - I meant to suggest that you fake as little as possible; I do not think it will make your tests more fragile.
Dror Helper
That's what I've been doing so far, with the benefit that it keeps my tests to the point and easy to read, but I have experienced some extra code maintenance when refactoring due to partial interface implementations (which fakes typically are).
Marius
It seems you're on the right track. If your code changes you might need to refactor your tests (mocks included) as well; you can minimize the amount of refactoring needed by faking as little as possible.
Dror Helper
A: 

I tend to use mocks instead of stubbed/fake objects. I find them a lot less trouble, and they are way better at keeping test code under control because it isn't cluttered with all sorts of half-baked implementations. They also help to clarify what is being tested.

Another advantage is that I only have to set up the mock where the class under test needs something specific from it, so I don't have to write code where it's not important. As for verification, again I only have to verify the calls from the class under test to the mock that I care about and consider important aspects of the test.
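This idea can be sketched with Python's stdlib `unittest.mock` (the thread mentions mockito, which works analogously; the class and method names below are illustrative):

```python
from unittest.mock import Mock

class Notifier:
    """Hypothetical class under test; collaborates with a mailer."""
    def __init__(self, mailer):
        self.mailer = mailer

    def welcome(self, user):
        self.mailer.send(user, "welcome")  # the interaction this test cares about
        self.mailer.log("sent")            # incidental; not worth asserting on

mailer = Mock()                 # no hand-written fake class required
Notifier(mailer).welcome("alice")

# Verify only the call that matters to this test; the log() call is
# simply ignored rather than specified.
mailer.send.assert_called_once_with("alice", "welcome")
```

Leaving the incidental `log()` call unverified is exactly what keeps the test from breaking when that detail changes.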

Derek Clarkson
Just to clarify that we're using the same semantics here: mocks and stubs are both fakes. The difference is that a fake you perform verification on is a mock, while a fake you do not perform verification on is a stub. And also, overuse of mocks is also considered over-specification. (Are you testing end-result state with your mocks?)
Marius
Ah, no, not correct at all. Mock objects are not stubs/fakes. Fakes and stubs are different names for the same thing: an explicitly coded extension of a class for the purpose of testing, either of the class it extends or for injecting into some other class that is being tested, to facilitate testing. They exist as an expressly written class in the test code. Mocks, on the other hand, are dynamically created by mocking frameworks such as mockito. Mocks have a number of advantages over stubs/fakes and are not as brittle.
Derek Clarkson
Well, then you and I (and Roy Osherove) disagree on the definitions of fakes/mocks and stubs :-) It also seems that others agree with me: http://stackoverflow.com/questions/463278/what-is-a-stub (look at the accepted answer)
Marius
I think I see the misunderstanding. MF's definitions are fine if you need to go to that level; I'm talking about the coding requirements. Dummies/Fakes/Stubs (DFSs) require coding an extra class; mocks do not. That makes a fundamental difference to the test code and is a cause of much confusion among developers. I've seen project teams refer to DFSs as mocks and then get all confused when they try to introduce a mocking framework. The reason I disagreed was that you said to implement mocks and stubs - that, to me, indicated a misunderstanding of what mocks are.
Derek Clarkson
A: 

I think the problem is always the same, although it comes in different flavours: if you have tests that somehow cover the internals of a class, then changing those internals will break the tests.

IMHO there are two ways to deal with that:

  1. Your tests only cover the public contract of a class - a test strategy which is widely adopted for exactly that reason: you don't have to change your tests as long as the public contract remains constant. Unfortunately, this is not what you will have when doing test-driven development.
  2. If your tests come from a TDD process, then they will regularly cover non-public code. This means that they will break if you change the code. The only way to keep things in sync here is to 'fix' the tests together with the code. This means more maintenance during development. There's no recipe to easily deal with that (other than throw away the test, of course...).
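Strategy 1 can be illustrated with a small sketch (the `Wallet` class and its internals are invented for illustration):

```python
class Wallet:
    """Hypothetical class whose public contract is deposit()/balance()."""
    def __init__(self):
        self._cents = 0  # internal representation; not part of the contract

    def deposit(self, amount):
        # Store as integer cents to avoid accumulating float error.
        self._cents += int(round(amount * 100))

    def balance(self):
        return self._cents / 100

# This test exercises only the public API, so it would survive an
# internal refactoring (e.g. replacing _cents with a Decimal field or
# a list of transactions) unchanged.
w = Wallet()
w.deposit(1.25)
w.deposit(0.75)
assert w.balance() == 2.0
```

The test knows nothing about `_cents`, which is precisely why the refactoring described in point 1 cannot break it.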

My personal 'way out' is to think in terms of 'code elements' rather than just code. A code element consists of three parts: documentation, test, code. So if you change one part of the element, you have to adjust the other two as well - otherwise you leave a broken code element behind.

Thomas Weller
Non-public code should lead to a public result; in other words, you can do TDD and focus on public contracts. There's no difference here between TDD and BDD. As long as you use techniques such as DI, you will have classes which interact with others to perform a job, and thus the problems I've stated arise.
Marius
You WILL have tests for non-public code! Sorry, but 'non-public code should lead to a public result' is not something that comes from real-life experience. How could you ever test-drive a non-public helper method that way?
Thomas Weller
I don't test the non-public helper method. I don't care that a non-public helper method actually exists. Why does the helper exist if it cannot be exposed as some public functionality? You can probably create a test scenario on the class under test which covers your helper and tests the end result.
Marius
I'm not talking about covering this or that - that would certainly be possible. I'm talking about test-driving code, and that's a totally different story...
Thomas Weller