In integration tests, asynchronous processes (methods, external services) make for very tough test code. If, instead, I factored out the async part into a dependency and replaced it with a synchronous one for the sake of testing, would that be a "good thing"?

By replacing the async process with a synchronous one, am I no longer testing in the spirit of integration testing? I'm assuming that integration testing means testing as close to the real thing as possible.

+1  A: 

We have a number of automated unit tests that send off asynchronous requests and need to test the output/results. The way we handle it is to perform all of the testing as if it were part of the actual application; in other words, asynchronous requests remain asynchronous. But the test harness acts synchronously: it sends off the asynchronous request, sleeps for [up to] a period of time (the maximum in which we would expect a result to be produced), and if no result is available by then, the test has failed. There are callbacks, so in almost all cases the test is awakened and continues running before the timeout has expired, but the timeouts mean that a failure (or a change in expected performance) will not stall/halt the entire test suite.

This has a few advantages:

  • The unit test is very close to the actual calling patterns of the application
  • No new code/stubs are needed to make the application code (the code being tested) run synchronously
  • Performance is tested implicitly: if the test slept for too short a period, then some performance characteristic has changed, and that needs looking into

The last point may need a small amount of explanation. Performance testing is important, and it is often left out of test plans. The way these unit tests are run, they end up taking a lot longer (running time) than if we had rearranged the code to do everything synchronously. However this way, performance is tested implicitly, and the tests are more faithful to their usage in the application. Plus all of our message queueing infrastructure gets tested "for free" along the way.
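For illustration, here is a minimal sketch of that harness pattern in Java, assuming a JUnit test and using a CountDownLatch so the callback can wake the test before the timeout expires. The AsyncService interface and the names below are illustrative stand-ins for the real asynchronous component, not actual code from the answer:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

import org.junit.Test;

public class AsyncHarnessTest {

    // Illustrative asynchronous component: it delivers its response on another thread.
    // In a real integration test this would be the actual application service.
    interface AsyncService {
        void send(String request, Consumer<String> callback);
    }

    private final AsyncService service = (request, callback) ->
            new Thread(() -> callback.accept("echo:" + request)).start();

    @Test
    public void responseArrivesBeforeTimeout() throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        AtomicReference<String> result = new AtomicReference<>();

        // Fire the request exactly as the application would; it stays asynchronous.
        service.send("ping", response -> {
            result.set(response);
            done.countDown();              // the callback wakes the test early
        });

        // Wait for at most the expected maximum latency; a missing response fails
        // the test instead of stalling the whole suite.
        assertTrue("no response within the timeout", done.await(5, TimeUnit.SECONDS));
        assertEquals("echo:ping", result.get());
    }
}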

Edit: Added note about callbacks

Adam Batkin
A: 

What are you testing? The behaviour of your class in response to certain stimuli? In which case don't suitable mocks do the job?

class Orchestrator implements AsynchCallback {

    TheAsynchService myDelegate;  // initialised by injection

    public Orchestrator(TheAsynchService aDelegate) {
        myDelegate = aDelegate;
    }

    public void doSomething(Request aRequest) {
        myDelegate.doTheWork(aRequest, this);
    }

    public void tellMeTheResult(Response aResponse) {
        // process the response
    }
}

Your test can do something like

 Orchestrator orch = new Orchestrator(mockAsynchService);

 orch.doSomething(request);

 // assertions here that the mockAsynchService received the expected request

 // now either the mock really does call back
 // or (probably more easily) make an explicit call to the tellMeTheResult() method

 // assertions here that the Orchestrator did the right thing with the response

Note that there's no true asynch processing here, and the mock itself needs no logic other than to allow verification that it received the correct request. For a unit test of the Orchestrator this is sufficient.
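Fleshed out with a mocking library (Mockito here, purely as an assumption; any mock that records its calls would do, and Request/Response stand for whatever types the orchestrator actually uses), the test might look like:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class OrchestratorTest {

    @Test
    public void delegatesRequestAndProcessesResponse() {
        TheAsynchService mockAsynchService = mock(TheAsynchService.class);
        Orchestrator orch = new Orchestrator(mockAsynchService);

        Request request = new Request();
        orch.doSomething(request);

        // The mock has no behaviour of its own; just verify it received the
        // expected request, with the orchestrator registered as the callback.
        verify(mockAsynchService).doTheWork(request, orch);

        // Drive the callback explicitly rather than waiting for real async work.
        Response response = new Response();
        orch.tellMeTheResult(response);

        // assertions here that the Orchestrator did the right thing with the response
    }
}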

I used this variation on the idea when testing BPEL processes in WebSphere Process Server.

djna
+4  A: 

Nice question.

In a unit test this approach would make sense, but for integration testing you should be testing the real system as it will behave in real life. This includes any asynchronous operations and any side effects they may have - this is the most likely place for bugs to exist, and it is probably where you should concentrate your testing, not factor it out.

I often use a "waitFor" approach where I poll to see if an answer has been received and time out after a while if not. A good implementation of this pattern is the JUnitConditionRunner; it's Java-specific, but you can get the gist. For example:

conditionRunner = new JUnitConditionRunner(browser, WAIT_FOR_INTERVAL, WAIT_FOR_TIMEOUT);   

protected void waitForText(String text) {
    try {
        conditionRunner.waitFor(new Text(text));
    } catch(Throwable t) {
        throw new AssertionFailedError("Expecting text " + text + " failed to become true. Complete text [" + browser.getBodyText() + "]");
    }
}
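If Selenium isn't in the picture, the same poll-until-true-or-timeout idea is easy to hand-roll; a minimal, illustrative sketch:

import java.util.function.BooleanSupplier;

public final class WaitFor {

    private WaitFor() {
    }

    // Poll the condition at a fixed interval until it becomes true,
    // failing if the timeout expires first.
    public static void waitFor(BooleanSupplier condition, long intervalMillis, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError("Condition did not become true within " + timeoutMillis + " ms");
            }
            Thread.sleep(intervalMillis);
        }
    }
}

A test would then call something like WaitFor.waitFor(() -> resultHolder.get() != null, 100, 5000); where resultHolder is whatever the callback writes into (an illustrative name, not part of the original code).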
Supertux
+1 for focus on integration tests. I agree that this can be where the nasty defects lurk.
djna
I do use the "waitFor" approach at the moment. However, not all async processing is created equal. In my app, the async processing is handled by the Windows message pump (WindowsFormsSynchronizationContext in .NET), and for some reason it doesn't like how it's being used in testing. Hence my question about working around it if possible. But you're right - we should be testing as close to the real thing as possible.
Jiho Han
Additionally, since my app is a rich client app, does that require/allow showing the UI during integration testing? The async mechanism is closely tied to the UI (it is the Windows message pump, after all, and it doesn't work without the UI). I can tweak the test code so that the UI is present but not interfering, but I'm not sure how that will fly with continuous integration. I'm tempted to replace the message-pump-based async with something else for the sake of testing, but that seems like an awful lot of work.
Jiho Han