I want to be able to split a big test into smaller tests so that, when the smaller tests pass, they imply that the big test would also pass (and there is therefore no reason to run the original big test). I want to do this because smaller tests usually take less time and effort and are less fragile. I would like to know whether there are test design patterns or verification tools that can help me achieve this test splitting in a robust way.

I fear that the connection between the smaller tests and the original test is lost when someone changes something in the set of smaller tests. Another fear is that the set of smaller tests doesn't really cover the big test.

An example of what I am aiming at:

//Class under test
class A {

  private B b;

  public void setB(B b){ this.b = b; }

  public Output process(Input i){
    return b.process(doMyProcessing(i));
  }

  private InputFromA doMyProcessing(Input i){ ..  }

  ..

}

//Another class under test
class B {

   public Output process(InputFromA i){ .. }

  ..

}

//The Big Test
@Test
public void theBigTest(){
 A systemUnderTest = createSystemUnderTest(); // <-- expect that this is expensive

 Input i = createInput();

 Output o = systemUnderTest.process(i); // <-- .. or expect that this is expensive

 assertEquals(o, expectedOutput());
}

//The split tests

@PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
@Test
public void smallerTest1(){
  // this method is a bit too long, but it's just an example..
  Input i = createInput();
  InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow
  Output expected = expectedOutput();  // this should be the same in both tests and it should be ensured somehow

  B b = mock(B.class);
  when(b.process(x)).thenReturn(expected);

  A classUnderTest = createInstanceOfClassA();
  classUnderTest.setB(b);

  Output o = classUnderTest.process(i);

  assertEquals(o, expected);
  verify(b).process(x);
  verifyNoMoreInteractions(b);
}

@PartlyDefines("theBigTest") // <-- so something like this should come from the tool..
@Test
public void smallerTest2(){
  InputFromA x = expectedInputFromA(); // this should be the same in both tests and it should be ensured somehow
  Output expected = expectedOutput();  // this should be the same in both tests and it should be ensured somehow

  B classUnderTest = createInstanceOfClassB();

  Output o = classUnderTest.process(x);

  assertEquals(o, expected);
}
A: 

All I can suggest is the book xUnit Test Patterns. If there is a solution, it should be in there.

Space_C0wb0y
+2  A: 

The first suggestion that I'll make is to refactor your tests on red (failing). To do so, you'll have to break your production code temporarily. This way, you know the tests are still valid.
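
For example, one way to force the "red" state on the classes from the question (a hypothetical, throw-away change that is reverted once the split tests are in place) is to sabotage the behaviour under test on purpose:

//Class A from the question, with process() temporarily broken on purpose
class A {

  private B b;

  public void setB(B b){ this.b = b; }

  public Output process(Input i){
    // return b.process(doMyProcessing(i)); // original behaviour, restored after the refactoring
    return null; // deliberate break: theBigTest and every split test that truly covers this path must now fail
  }

  private InputFromA doMyProcessing(Input i){ .. }

}

While this change is in place, any "smaller" test that stays green is not actually covering the broken behaviour, which is exactly the information you need before deleting the big test.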

One common pattern is to use a separate test fixture per collection of "big" tests. You don't have to stick to the "all tests for one class in one test class" pattern. If a set of tests is related to each other, but unrelated to another set of tests, then put them in their own class.

The biggest advantage to using a separate class to hold the individual small tests for the big test is that you can take advantage of setup and tear-down methods. In your case, I would move the lines you have commented with:

// this should be the same in both tests and it should be ensured somehow

to the setup method (in JUnit, a method annotated with @Before). If you have some unusually expensive setup that needs to be done, most xUnit testing frameworks have a way to define a setup method that runs once before all of the tests. In JUnit, this is a public static void method that has the @BeforeClass annotation.

If the test data is immutable, I tend to define the variables as constants.

Putting all this together, you might have something like:

public class TheBigTest {

    // If InputFromA is immutable, it could be declared as a constant
    private InputFromA x;
    // If Output is immutable, it could be declared as a constant
    private Output expected;

    // You could use 
    // @BeforeClass public static void setupExpectations()
    // instead if it is very expensive to setup the data
    @Before
    public void setUpExpectations() throws Exception {
      x = expectedInputFromA();
      expected = expectedOutput();
    }

    @Test
    public void smallerTest1(){
      // this method is a bit too long, but it's just an example..
      Input i = createInput();

      B b = mock(B.class);
      when(b.process(x)).thenReturn(expected);

      A classUnderTest = createInstanceOfClassA();
      classUnderTest.setB(b);

      Output o = classUnderTest.process(i);

      assertEquals(o, expected);
      verify(b).process(x);
      verifyNoMoreInteractions(b);
    }

    @Test
    public void smallerTest2(){
      B classUnderTest = createInstanceOfClassB();

      Output o = classUnderTest.process(x);

      assertEquals(o, expected);
    }

}
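
If setting up the expected data really is expensive, the same idea can be pushed one step further with the @BeforeClass variant mentioned above (a sketch, assuming expectedInputFromA() and expectedOutput() can be made static; the test methods stay exactly as in TheBigTest):

public class TheBigTest {

    // Computed once for the whole class instead of once per test method
    private static InputFromA x;
    private static Output expected;

    @BeforeClass
    public static void setUpExpectations() throws Exception {
        x = expectedInputFromA();
        expected = expectedOutput();
    }

    // smallerTest1() and smallerTest2() unchanged from the example above

}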
Jim Hurne
+1 Keeping the tests in the same class (and naming the test class as you did) will make it less likely that someone accidentally breaks the connection between the original test and the smaller tests. Thanks for this. Still, I'm missing some automatic way of knowing that the smaller tests imply that the big test would pass.
mkorpela
@mkorpela, can you elaborate a little on why it's important that the smaller tests imply that the big test would pass? If one of the smaller tests fails, isn't that enough to indicate that a problem exists? In any case, most test runners have an indication of the status for the entire test class. For example, the NUnit and Eclipse JUnit test runners both mark the test class as "green" if all of the tests in the class pass, and "red" if one or more tests fail. "TheBigTest" would be marked as passing if all of the smaller tests pass.
Jim Hurne
@Hurne, I can't remove the original test if it can fail in a situation where the smaller tests would pass.
mkorpela
Isn't that just something that gets flushed out during the refactoring? That's why refactoring on red is important. Keep the original test around (and failing) while you pull out the smaller tests. As you pull out a small test, update the production code to make the small test pass, and then pull out another small test. You know you can stop pulling out smaller tests once the big test passes. At that point, you can then delete the original big test.
Jim Hurne
That is a good way to do test splitting, but I would have to break the tested code in every possible way (a big number) to be really sure that I have all the tests.
mkorpela
You probably want to use a code coverage tool to verify that the new tests cover the system under test.
Gutzofter
A: 

theBigTest is missing the dependency on B. Also, smallerTest1 mocks the B dependency. In smallerTest2 you should mock InputFromA.

Why did you create a dependency graph like you did?

A takes a B; then, when A::process is called with an Input, the resulting InputFromA is post-processed in B.

Keep the big test and refactor A and B to change the dependency mapping.

[EDIT] in response to remarks.

@mkorpela, my point is that looking at the code and its dependencies is how you start to get an idea of how to create smaller tests. A has a dependency on B. In order for it to complete its process() it must use B's process(). Because of this, B also has a dependency on A.

Gutzofter
First of all, it's only a very (too) simple example of what I am aiming at. I'm trying to identify general rules / tools for splitting tests so that the meaning of the original test is still there. theBigTest is missing the dependency on B because the test is on a more abstract level (it doesn't need to know that there is a B, but the cost of not knowing is that the test will take longer to execute).
mkorpela
I understand about simple examples, but my point is that `B` is a null reference when you execute `systemUnderTest.process(i)`, so it won't run. It may require you to extract a method or class for that behavior, and to extract a method or class to test that behavior. Also, if there is a possibility that the two tests will pass while the big test fails, you need to have a third test that tests that failing behavior.
Gutzofter
@Gutzofter, my createSystemUnderTest() method will initialize systemUnderTest correctly in the made up example (assume that B will not be null). It is not the same method as used in the other tests. "if there is a possibility that the two tests will pass and there is a possibility that the big test will fail, you need to have a third test that tests that failing behavior." - the main thing that I'm asking is how to ensure that the smaller tests will cover the original test.
mkorpela