views:

203

answers:

3

I am a heavy advocate of proper Test Driven Design or Behavior Driven Design and I love writing tests. However, I keep coding myself into a corner where I need 3-5 mocks in a particular test case for a single class. No matter which way I start, top-down or bottom-up, I end up with a design that requires at least three collaborators from the highest level of abstraction. Can somebody give me good advice on how to avoid this pitfall?

Here's a typical scenario. I design a Widget that produces a Midget from a given text value. It always starts really simple until I get into the details. My Widget must interact with several hard-to-test things like file systems, databases, and the network. So instead of designing all of that into my Widget, I make a Bridget collaborator. The Bridget takes care of one half of the complexity, the database and network, allowing me to focus on the other half, which is multimedia presentation. So then I make a Gidget that performs the multimedia piece. The entire thing needs to happen in the background, so now I include a Thridget to make that happen. When all is said and done, I end up with a Widget that hands work to a Thridget, which talks over a Bridget to give its result to a Gidget.

Because I'm working in CocoaTouch and trying to avoid mock objects, I use the self-shunt pattern, where abstractions over collaborators become protocols that my test adopts. With 3+ collaborators my test balloons and becomes too complicated. Even using something like OCMock leaves me with an order of complexity that I'd rather avoid. I've tried wrapping my brain around a daisy-chain of collaborators (A delegates to B, who delegates to C, and so on) but I can't envision it.
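For illustration, here's roughly what self-shunt looks like in Java (the Widget and Gidget names come from my scenario above; the method signatures are made up): the test case itself adopts the collaborator abstraction, so there is no separate mock object.

import junit.framework.TestCase;

// Hypothetical collaborator abstraction (what would be a protocol in Cocoa).
interface Gidget {
    void present(Object media);
}

// The class under test depends only on the abstraction.
class Widget {
    private final Gidget presenter;
    Widget(Gidget presenter) { this.presenter = presenter; }

    void play(String text) {
        // ...turn the text into something presentable, then hand it off...
        presenter.present("media for " + text);
    }
}

// Self-shunt: the test case itself plays the Gidget role.
public class WidgetSelfShuntTest extends TestCase implements Gidget {
    private Object presented;

    public void present(Object media) { presented = media; }

    public void testWidgetHandsItsResultToTheGidget() {
        new Widget(this).play("some text");
        assertEquals("media for some text", presented);
    }
}

With three or more such protocols the test class has to adopt all of them, and that is exactly where the ballooning starts.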

Edit: Taking an example from below, let's assume we have an object that must read from/write to sockets and present the movie data returned.

// Assume myRequest is a String param...
InputStream   aIn  = aSocket.getInputStream();
OutputStream  aOut = aSocket.getOutputStream();
DataProcessor aProcessor = ...;

// This gets broken into a "Network" collaborator.
for (char c : myRequest.toCharArray()) aOut.write(c);
Object data = aIn.read(); // Simplified read

// This is our second collaborator.
aProcessor.process(data);

Now the above obviously deals with network latency, so it has to be threaded. This introduces a Thread abstraction to get us out of the practice of writing threaded unit tests. We now have:

AsynchronousWorker myWorker = getWorker(); // here's our third collaborator
myWorker.doThisWork(new WorkRequest() {
    public void run() { // assuming WorkRequest exposes a run() callback
        // Assume myRequest is a String param...
        DataProcessor aProcessor = ...;

        // Use our "Network" collaborator.
        NetworkHandler networkHandler = getNetworkHandler();
        Object data = networkHandler.retrieveData(); // Simplified read

        // This is our multimedia collaborator.
        aProcessor.process(data);
    }
});

Forgive me for working backwards without tests, but I'm about to take my daughter outside and I'm rushing through the example. The idea here is that I'm orchestrating the collaboration of several collaborators from behind a simple interface that will get tied to a UI button click event. So the outermost test reflects a Sprint task that says: given a "Play Movie" button, when it is clicked, the movie will play.

Edit: Let's discuss.
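One way to keep that outermost test small is to give the orchestrator a synchronous fake worker, so the threading never shows up in the test at all. Below is a sketch; everything in it is hypothetical, reusing the AsynchronousWorker/WorkRequest/NetworkHandler/DataProcessor names from the example above.

import junit.framework.TestCase;

// Hypothetical abstractions mirroring the example above.
interface WorkRequest { void run(); }
interface AsynchronousWorker { void doThisWork(WorkRequest request); }
interface NetworkHandler { Object retrieveData(); }
interface DataProcessor { void process(Object data); }

// The orchestrator that sits behind the "Play Movie" button.
class MovieWidget {
    private final AsynchronousWorker worker;
    private final NetworkHandler network;
    private final DataProcessor processor;

    MovieWidget(AsynchronousWorker worker, NetworkHandler network, DataProcessor processor) {
        this.worker = worker;
        this.network = network;
        this.processor = processor;
    }

    void playMovieClicked() {
        worker.doThisWork(new WorkRequest() {
            public void run() {
                processor.process(network.retrieveData());
            }
        });
    }
}

// Synchronous fake: runs the work immediately, so the test never sees a thread.
class ImmediateWorker implements AsynchronousWorker {
    public void doThisWork(WorkRequest request) { request.run(); }
}

// Self-shunt doubles for the other two collaborators.
public class PlayMovieTest extends TestCase implements NetworkHandler, DataProcessor {
    private Object processed;

    public Object retrieveData() { return "movie bytes"; }  // stub network
    public void process(Object data) { processed = data; }  // shunt processor

    public void testClickingPlayMovieProcessesTheDownloadedData() {
        new MovieWidget(new ImmediateWorker(), this, this).playMovieClicked();
        assertEquals("movie bytes", processed);
    }
}

The real AsynchronousWorker implementation still needs its own (integration-style) test, but it only has to be tested once rather than in every test that touches it.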

+1  A: 

I do some fairly complete testing, but it's automated integration testing rather than unit testing, so I have no mocks (except the user: I mock the end-user, simulating user-input events and testing/asserting whatever is output to the user). See: Should one test internal implementation, or only test public behaviour?


What I'm looking for is best practices using TDD.

Wikipedia describes TDD as,

a software development technique that relies on the repetition of a very short development cycle: First the developer writes a failing automated test case that defines a desired improvement or new function, then produces code to pass that test and finally refactors the new code to acceptable standards.

It then goes on to prescribe:

  1. Add a test
  2. Run all tests and see if the new one fails
  3. Write some code
  4. Run the automated tests and see them succeed
  5. Refactor code

I do the first of these, i.e. the "very short development cycle"; the difference in my case is that I test after the code is written.

The reason I test after it's written is that I don't need to "write" any tests at all, not even the integration tests.

My cycle is something like:

  1. Rerun all automated integration tests (start with a clean slate)
  2. Implement a new feature (with refactoring of the existing code if necessary to support the new feature)
  3. Rerun all automated integration tests (regression testing to ensure that new development hasn't broken existing functionality)
  4. Test the new functionality:

    a. End-user (me) does user input via the user interface, intended to exercise the new feature

    b. End-user (me) inspects the corresponding program output, to verify whether the output is correct for the given input

  5. When I do the testing in step 4, the test environment captures the user input and program output into a data file; the test environment can replay such a test in the future (recreate the user input, and assert whether the corresponding output is the same as the expected output captured previously). Thus, the test cases which were run/created in step 4 are added to the suite of all automated tests (a rough sketch of such a capture/replay harness follows).
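For concreteness, here's one way such a capture/replay harness could be sketched in Java. ChrisW's actual test environment is not described here, so the Application interface, file format, and class names are all invented for illustration.

import java.io.*;
import java.util.*;

// Invented sketch of a capture/replay harness: in record mode it logs each
// (input, output) pair to a file; in replay mode it feeds the recorded inputs
// back through the application and asserts the outputs still match.
public class CaptureReplayHarness {
    public interface Application {
        String handle(String userInput);
    }

    public static void record(Application app, List<String> inputs, File logFile) throws IOException {
        PrintWriter log = new PrintWriter(new FileWriter(logFile));
        try {
            for (String input : inputs) {
                String output = app.handle(input);
                log.println(input);
                log.println(output);
            }
        } finally {
            log.close();
        }
    }

    public static void replay(Application app, File logFile) throws IOException {
        BufferedReader log = new BufferedReader(new FileReader(logFile));
        try {
            String input;
            while ((input = log.readLine()) != null) {
                String expected = log.readLine();
                String actual = app.handle(input);
                if (!expected.equals(actual)) {
                    throw new AssertionError("For input '" + input
                            + "' expected '" + expected + "' but got '" + actual + "'");
                }
            }
        } finally {
            log.close();
        }
    }
}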

I think this gives me the benefits of TDD:

  • Testing is coupled with development: I test immediately after coding instead of before coding, but in any case the new code is tested before it's checked in; there's never untested code.

  • I have automated test suites, for regression testing

I avoid some costs/disadvantages:

  • Writing tests (instead I create new tests using the UI, which is quicker and easier, and closer to the original requirements)

  • Creating mocks (required for unit testing)

  • Editing tests when the internal implementation is refactored (because the tests depend only on the public API and not on the internal implementation details).

ChrisW
Thanks for replying. What I'm looking for is best practices using TDD. I'm currently trying to unroll a design with 3-4 collaborators to make it easier to follow both the test and the design.
Cliff
I added to my answer, to compare my approach with TDD.
ChrisW
I should be more specific. I wasn't actually looking for a definition or explanation of TDD. Rather, I'm struggling with how to correctly apply it in my given scenario above.
Cliff
You said "I keep coding myself into a corner where I need to use 3-5 mocks", and I said what I did to give an alternative where there's still immediate, automated testing but no mocks required at all.
ChrisW
I see completely different benefits of TDD. The main benefit comes from the last D, design. I seriously don't care whether my code is tested for bugs or not. I'm primarily concerned with the design, where starting with a test pays the biggest dividends. Catching bugs happens completely by accident and it's not something I plan for. If you've never designed an object with a test, I suggest you try. You write far less test and implementation code.
Cliff
A: 

My solution (not CocoaTouch) is to continue to mock the objects, but to refactor the mock setup into a common test method. This reduces the complexity of the test itself while retaining the mock infrastructure to test my class in isolation.
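A minimal sketch of that idea in Java, assuming the MovieWidget orchestrator and collaborator interfaces from the hypothetical sketches above, and using Mockito purely for illustration: all of the mock wiring lives in one common setup method, so each individual test stays short.

import static org.mockito.Mockito.*;
import org.junit.Before;
import org.junit.Test;

public class MovieWidgetMockTest {
    private AsynchronousWorker worker;
    private NetworkHandler network;
    private DataProcessor processor;
    private MovieWidget widget;

    // All mocks are created and wired in one place.
    @Before
    public void createCollaborators() {
        worker = mock(AsynchronousWorker.class);
        network = mock(NetworkHandler.class);
        processor = mock(DataProcessor.class);
        widget = new MovieWidget(worker, network, processor);
    }

    @Test
    public void clickingPlayHandsWorkToTheBackgroundWorker() {
        widget.playMovieClicked();
        verify(worker).doThisWork(any(WorkRequest.class));
    }
}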

tvanfosson
+2  A: 

Having many mock objects shows that:

1) You have too many dependencies. Take another look at your code and try to break it down further. In particular, try to separate the data transformation and processing from the input/output around them.

I don't have experience with the environment you are developing in, so let me give an example from my own experience.

In Java, a socket gives you an InputStream and an OutputStream so that you can read data from and send data to your peer. So your program looks like this:

InputStream  aIn  = aSocket.getInputStream();
OutputStream aOut = aSocket.getOutputStream();

// Read data
Object data = aIn.read(); // Simplified read
// Process
if (data.equals('1')) {
    // Do something
    // Write data
    aOut.write('A');
} else {
    // Do something else
    // Write another data
    aOut.write('B');
}

If you want to test this method, you end up having to create mocks for aIn and aOut, which may require quite complicated classes behind them to support the test.

But if you look carefully, reading from aIn and writing to aOut can be separated from the processing. So you can create another class which takes the read input and returns an output object.

public class ProcessSocket {
    public Object process(Object readObject) {
        if (readObject.equals(...)) {
            // Do something
            // Write data
            return 'A';
        } else {
            // Do something else
            // Write another data
            return 'B';
        }
    }
}

and your previous method will be:

InputStream   aIn  = aSocket.getInputStream();
OutputStream  aOut = aSocket.getOutputStream();
ProcessSocket aProcessor = ...;

// Read data
Object data = aIn.read(); // Simplified read
aProcessor.process(data);

This way you can test the processing with little need for mocks. Your test can be as simple as:


ProcessSocket aProcessor = ...;
assert(aProcessor.process('1').equals('A'));

Because the processing is now independent of the input, the output, and even the socket.
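Spelled out as a JUnit sketch (assuming ProcessSocket has a no-argument constructor and that the elided condition is the readObject.equals('1') check from the original code), the test needs no streams, no sockets, and no mocks:

import junit.framework.TestCase;

public class ProcessSocketTest extends TestCase {
    public void testRespondsWithAForInputOne() {
        ProcessSocket aProcessor = new ProcessSocket();
        assertEquals('A', aProcessor.process('1'));
    }

    public void testRespondsWithBForAnyOtherInput() {
        ProcessSocket aProcessor = new ProcessSocket();
        assertEquals('B', aProcessor.process('2'));
    }
}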

2) You are over-unit-testing, unit testing things that should be integration tested.

Some tests are not suited to unit testing (in the sense that they require unnecessary extra effort and may not give you a good indicator efficiently). Examples of this kind of test are those involving concurrency and user interfaces. They require different ways of testing than unit testing.

My advice would be to break them down further (similar to the technique above) until some parts are suitable for unit testing, so that you are left with only small hard-to-test parts.

EDIT

If you believe you have already broken it into very fine pieces, then perhaps that is your problem.

Software components or sub-components are related to each other in the way that characters combine into words, words into sentences, sentences into paragraphs, paragraphs into subsections, sections, chapters and so on.

My point is that you should be breaking subsections into paragraphs, whereas you have already broken things all the way down to words.

Look at it this way: most of the time, paragraphs are related to (or depend on) other paragraphs more loosely than sentences are related to other sentences. Subsections and sections are even more loosely coupled, while words and characters are more dependent on one another (as the grammatical rules kick in).

So perhaps you are breaking things down so finely that the language syntax forces those dependencies on you, which in turn forces you to have so many mock objects.

If that is the case, your solution is to balance the tests. If a part is depended on by many others and it requires a complex set of mock objects (or simply more effort to test), maybe you don't need to test it in isolation. For example, if A uses B, C uses B, and B is so damn hard to test, why not just test A+B as one unit and C+B as another? In my example, if ProcessSocket is so hard to test, so hard that you would spend more time testing it and maintaining its tests than developing it, then it is not worth it and I would just test the whole thing at once (see the sketch below).
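A tiny sketch of that "test A+B as one unit" idea (A and B here are just placeholder classes): the test constructs A, which uses a real B internally, and asserts only on A's observable behaviour.

import junit.framework.TestCase;

// B is cheap to construct and deterministic, so A's test can use the real thing.
class B {
    String transform(String input) { return input.toUpperCase(); }
}

class A {
    private final B b = new B();
    String shout(String input) { return b.transform(input) + "!"; }
}

// One test covers the A+B cluster through A's public API; no mock for B.
public class AWithRealBTest extends TestCase {
    public void testShoutGoesThroughTheRealB() {
        assertEquals("HELLO!", new A().shout("hello"));
    }
}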

Without seeing your code (and given that I have never developed for CocoaTouch) it is hard to tell, and I may not be able to provide a good comment here. Sorry :D.

EDIT 2: Seeing your example, it is pretty clear that you are dealing with an integration issue. Assuming that you already test the movie playing and the UI separately, it is understandable why you need so many mock objects. If this is the first time you are using this kind of integration structure (this concurrency pattern), then those mock objects may actually be needed and there is not much you can do about it. That's all I can say :-p

Hope this helps.

NawaMan
Some good points there. :)
Mark Simpson
I have things broken down very fine-grained. Like I said, the difficult-to-test things are walled behind abstractions. My biggest problem is at the high level, in contrast to your lower-level example. The test around the high-level object requires lots of collaborators. Maybe I should edit my question with a more concrete example?
Cliff
NawaMan, thank you for taking the time to work with me through this. Yes, my issue is part integration and part design. From the top down, the design begins simply: given an input to a specific software component, I expect a reaction. The input is simple, but then the design always explodes immediately into 3-5 things from the top. If I start bottom-up, then I end up with 3-5 things that need to be tied into the top abstraction. I feel like I'm missing something fundamental.
Cliff
You are welcome.
NawaMan