Given the following simple example:

class MathObject(object):
    """A completely superfluous class."""
    def add(self, a, b):
        return a + b

    def multiply(self, a, b):
        result = 0
        for _ in range(b):
            result = self.add(result, a)
        return result

Obviously, multiply() calls add() internally. If add() fails, multiply() fails too. In a sufficiently complex class it can be genuinely difficult to work out why exactly a unit test failed.

How does one unit test methods/objects/parts that have dependencies?

+2  A: 

I usually just let them fail - classes should be simple enough that you can spot the bad test quickly.
But in complex cases we've used a simple naming convention for tests to enforce a certain execution order (def test_00_add, def test_01_multiply).

Again, if your classes get big this will be harder to manage, so just don't let them get big :)
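A minimal sketch of that convention using the standard unittest module (which, by default, runs test methods in alphabetical order), reusing the MathObject class from the question:

```python
import unittest

class MathObject(object):
    """The class from the question."""
    def add(self, a, b):
        return a + b

    def multiply(self, a, b):
        result = 0
        for _ in range(b):
            result = self.add(result, a)
        return result

class TestMathObject(unittest.TestCase):
    # unittest sorts test methods alphabetically by default, so the
    # numeric prefixes make add() get verified before multiply().
    def setUp(self):
        self.m = MathObject()

    def test_00_add(self):
        self.assertEqual(self.m.add(2, 3), 5)

    def test_01_multiply(self):
        self.assertEqual(self.m.multiply(2, 3), 6)

if __name__ == "__main__":
    unittest.main()
```

If test_00_add fails first, you know a multiply() failure further down is probably just fallout from the broken add().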

abyx
A: 

As a general rule, if a class is complex enough that it is difficult to unit test, you should probably split it into simpler, related classes.

Also, it is a best practice to design your class dependencies on interfaces, not on concrete classes, so that you can use mocks for unit testing.
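A minimal sketch of that idea with the standard unittest.mock module. The Multiplier class and its adder parameter are hypothetical, not from the question; the point is that the dependency is injected, so the test can substitute a mock:

```python
from unittest.mock import Mock

class Multiplier:
    """Hypothetical class: receives its 'adder' dependency instead of
    hard-coding a concrete class, so tests can substitute a mock."""
    def __init__(self, adder):
        self.adder = adder

    def multiply(self, a, b):
        result = 0
        for _ in range(b):
            result = self.adder.add(result, a)
        return result

# The mock stands in for the real adder, so a bug in the real add()
# cannot cause this test of multiply() to fail.
mock_adder = Mock()
mock_adder.add.side_effect = lambda x, y: x + y

assert Multiplier(mock_adder).multiply(2, 3) == 6
assert mock_adder.add.call_count == 3  # called once per loop iteration
```

Because the test controls the mock's behaviour, a failure here points at multiply() itself, not at its dependency.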

Konamiman
+1  A: 

The fact that multiply() internally uses add() is an implementation detail of multiply(). So don't take it into account explicitly in your tests; "just" write tests for the functionality of both add() and multiply().
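In that style, the tests assert only on observable results. A sketch, reusing the MathObject class from the question:

```python
class MathObject(object):
    """The class from the question."""
    def add(self, a, b):
        return a + b

    def multiply(self, a, b):
        result = 0
        for _ in range(b):
            result = self.add(result, a)
        return result

# Each assertion checks observable behaviour only; none of them cares
# that multiply() happens to call add() internally.
m = MathObject()
assert m.add(2, 3) == 5
assert m.add(-1, 1) == 0
assert m.multiply(0, 10) == 0
assert m.multiply(4, 3) == 12
```

If multiply() were later rewritten to use the * operator, these tests would pass unchanged, which is exactly the point.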

If you use TDD to arrive at your code, your classes shouldn't get so complicated that the problem you describe becomes a real one.

So in essence, I agree with abyx. ;-)

peSHIr
A: 

It kind of depends on whether the sub-method/object/whatever is an internal implementation detail or a collaborator.

If there is only one correct final result and it will never change, it's probably worth testing them together. But if the behavior of the 'internal' object is actually separate behavior, it's probably best to put it behind an interface.
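One way to sketch that split: if the 'internal' object is a real collaborator, hide it behind an interface and hand the outer object a test double. The Adder, CountingAdder, and Multiplier names here are hypothetical, not from the question:

```python
from abc import ABC, abstractmethod

class Adder(ABC):
    """Hypothetical interface for the collaborator."""
    @abstractmethod
    def add(self, a, b):
        ...

class CountingAdder(Adder):
    """Test double: adds correctly but also records how often it is used."""
    def __init__(self):
        self.calls = 0

    def add(self, a, b):
        self.calls += 1
        return a + b

class Multiplier:
    """Hypothetical outer object; depends only on the Adder interface."""
    def __init__(self, adder):
        self.adder = adder

    def multiply(self, a, b):
        result = 0
        for _ in range(b):
            result = self.adder.add(result, a)
        return result

adder = CountingAdder()
assert Multiplier(adder).multiply(3, 4) == 12
assert adder.calls == 4  # the collaborator was invoked exactly b times
```

The second assertion tests Multiplier's interaction with its collaborator without depending on any particular Adder implementation.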

I'm not sure if that answer is clear or not....

kyoryu
A: 

An ideal test runner would sort this out.

Kent Beck developed an intelligent test runner that executes tests based on failure statistics and execution time.

A sophisticated flow-analysis-based test runner could structure the test execution to prioritize tests that exercise functionality other code depends on.

Thomas Jung