Hi,

This question relates to PHPUnit, although it's really a general xUnit design question.

I'm writing a unit test case for an Image class.

One of the methods of this class is setBackgroundColor().

There are 4 different behaviors I need to test for this method:

  1. Trying to set an invalid background color. Multiple invalid parameters will be tested.
  2. Trying to set a valid background color using a short hand RGB array, e.g. array(255,255,255)
  3. Trying to set a valid background color using a standard RGB array, e.g. array('red' => 255, 'green' => 255, 'blue' => 255) (this is the output format of the GD function imagecolorsforindex())
  4. Trying to set a valid background color using the transparent constant IMG_COLOR_TRANSPARENT

At the moment, I have all of this contained within 1 test in my test case, called testSetBackgroundColor(). However, I'm getting the feeling these should be 4 separate tests, as the test is getting quite long and doing a lot.

My question is: what should I do here? Do I keep all of this in 1 test of the Image test case, or do I split it into separate tests like the following?

  • testSetBackgroundColorErrors
  • testSetBackgroundColorShorthandRGB
  • testSetBackgroundColorRGB
  • testSetBackgroundColorTransparent

I've put the test in question here http://pastebin.com/f561fc1ab.
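
Roughly, I imagine the split version would look something like the sketch below. The assertions are only placeholders, and I'm assuming setBackgroundColor() throws an InvalidArgumentException on invalid input; the actual code is in the pastebin above.

    class ImageTest extends PHPUnit_Framework_TestCase
    {
        protected $image;

        protected function setUp()
        {
            // Each test starts from a fresh Image instance
            $this->image = new Image();
        }

        public function testSetBackgroundColorErrors()
        {
            // Assuming invalid input triggers an exception
            $this->setExpectedException('InvalidArgumentException');
            $this->image->setBackgroundColor('not a color');
        }

        public function testSetBackgroundColorShorthandRGB()
        {
            $this->image->setBackgroundColor(array(255, 255, 255));
            // ...assert the background color was applied...
        }

        public function testSetBackgroundColorRGB()
        {
            $this->image->setBackgroundColor(array('red' => 255, 'green' => 255, 'blue' => 255));
            // ...assert the background color was applied...
        }

        public function testSetBackgroundColorTransparent()
        {
            $this->image->setBackgroundColor(IMG_COLOR_TRANSPARENT);
            // ...assert the image background is transparent...
        }
    }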

Thanks

+4  A: 

Split it. Absolutely.

When a unit test fails, it must be immediately clear exactly what is broken. If you combine the tests, you'll end up debugging the test itself just to find out which case failed.

By the way, are you writing your tests first? With TDD you're unlikely to end up with bloated tests.

Ivan Krechetov
Pretty much my thoughts, just wanted to hear it from someone else. Cheers.
Stephen Melrose
Just saw your last question. I'm doing it the wrong way round with this class, but I'll be employing TDD from now on as I can see the benefit.
Stephen Melrose
+3  A: 

My preference is to split the tests as you describe.

  • It makes it more obvious what's gone wrong when a test fails, and therefore quicker to debug
  • You get the benefit of the objects being reset to a clean starting state between tests
  • It makes it easier to see which tests you've included/omitted just by looking at the method names
Paolo
Out of interest, would you split the following into two separate tests, one for errors and one for passes, or is it OK as one due to the dataProvider? http://pastebin.com/f5365ed9d
Stephen Melrose
This is OK as is, I think. The use of a dataProvider is (as far as I understand it) a neat way of automating calls to the same test with multiple inputs. It will highlight which set of inputs caused any problem, so this satisfies the first two points in my list. You can also see the test data set easily, so I'd say that covers the last point as well.
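For anyone not familiar with the pattern, a minimal sketch (the inputs and exception type here are made up for illustration, not taken from the pastebin):

    class ImageTest extends PHPUnit_Framework_TestCase
    {
        public function invalidBackgroundColorProvider()
        {
            // Each inner array is one set of arguments for the test method
            return array(
                array('not a color'),
                array(array(256, 0, 0)),
                array(null),
            );
        }

        /**
         * @dataProvider invalidBackgroundColorProvider
         */
        public function testSetBackgroundColorErrors($invalidColor)
        {
            // Assuming invalid input triggers an exception
            $this->setExpectedException('InvalidArgumentException');
            $image = new Image();
            $image->setBackgroundColor($invalidColor);
        }
    }

On failure, PHPUnit reports which data set was in use, which is what makes the offending input easy to identify.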
Paolo
I was hoping that was the case. Thank you!
Stephen Melrose
A: 

I conceptually split my testing into two categories (as quite a few TDD practitioners do): integration tests and unit tests. A unit test should test one thing, and I should be disciplined about testing the single contract that I'm writing at any given moment; in general, one method needs one test. This forces me to write small, testable methods that I have a high degree of confidence in, which in turn tends to guide me towards writing small, testable classes.

Integration tests are higher-level tests that exercise the interactions between components that unit tests have already proven to work as expected in isolation. I write fewer of these, and they have to be applied judiciously, as there can never be full integration-level coverage. They focus on proving out the riskier areas of interaction between components, and may use written acceptance tests as a guide.

Identifying areas that need integration testing is more of a 'feel' thing. If you've been disciplined about the unit tests, you should have a good idea where the integration testing needs are, i.e. those areas with deeper call stacks, cross-process interaction, or the like, where you know there's higher risk. Alternatively, you may decide integration tests are a good way to prove high-level behavioral expectations that map onto the product owner's written requirements; that is a good use as well.

Dave Sims