I'm wondering if there is a test framework that allows tests to be declared as dependent on other tests. The implication would be that dependent tests are not run, or that their results are not prominently displayed, if the tests they depend on fail.

The point of such a setup would be to allow the root cause to be more readily determined in a situation where there are many test failures.

As a bonus, it would be great if there were some way to use an object created in one test as a fixture for other tests.

Is this feature set provided by any of the Python testing frameworks? Or would such an approach be antithetical to unit testing's underlying philosophy?
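
For contrast: py.test's fixture mechanism already lets setup code build an object once and hand it to several tests (and a failure while building it marks the dependent tests as errors rather than as unrelated failures); what I'm after is the same thing, but with the object coming from another test. A minimal sketch of the existing fixture behaviour, assuming a reasonably recent py.test (the sqlite schema is just illustrative):

    # test_shared_fixture.py -- run with: py.test test_shared_fixture.py
    import sqlite3
    import pytest

    @pytest.fixture(scope="module")
    def conn():
        # Built once for the whole module and injected into every test
        # that names it; if this setup fails, the tests below are
        # reported as errors rather than as independent failures.
        connection = sqlite3.connect(":memory:")
        connection.execute("CREATE TABLE t (x INTEGER)")
        yield connection
        connection.close()

    def test_insert(conn):
        conn.execute("INSERT INTO t VALUES (1)")
        assert conn.execute("SELECT count(*) FROM t").fetchone()[0] == 1

    def test_schema(conn):
        names = [r[0] for r in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
        assert names == ["t"]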

+3  A: 

Or would such an approach be antithetical to unit testing's underlying philosophy?

Yep... if it is a unit test, it should be able to run on its own. Any time I have found someone wanting to create dependencies between tests, it was because the code was structured poorly. I'm not saying that is necessarily true in your case, but it can often be a sign of a code smell.

Aaron
A: 

It looks like what you need is not to prevent the execution of your dependent tests, but to report the results of your unit tests in a more structured way that lets you identify when an error in one test cascades into other test failures.

EKI
Sure, yeah... preventing the tests from running would only really be useful if they take a while to run. Is there some established way to suss out error cascades?
intuited
+1  A: 

This seems to be a recurring question - e.g. #3396055

These most probably aren't unit tests, because unit tests should be fast (and independent), so running them all isn't a big drag. I can see where this might help in short-circuiting integration/regression runs to save time. If this is a major need for you, I'd tag the setup tests with [Core] or some such attribute.

I'd then write a build script with two tasks:

  • Task N: run all tests in the X, Y, Z dlls marked with the [Core] tag
  • Task N+1 (depends on Task N): run all tests in the X, Y, Z dlls excluding those marked [Core]

(Task N+1 shouldn't run if Task N didn't succeed.) It isn't a perfect solution; e.g. it would bail out entirely if any one [Core] test failed. But I guess you should be fixing the [Core] ones before proceeding to the non-[Core] tests anyway.
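
Sketched in Python terms, using py.test markers in place of NUnit-style [Core] attributes (the "core" marker name and the tests/ path are assumptions, not an established convention):

    # run_tests.py -- a rough sketch of the two-task build script above.
    # "Core" tests would be tagged with @pytest.mark.core in the suite.
    import sys
    import pytest

    # Task N: run only the tests marked as core.
    result = pytest.main(["-m", "core", "tests/"])
    if result != 0:
        # Task N+1 depends on Task N: bail out instead of drowning the
        # report in cascading non-core failures.
        sys.exit(result)

    # Task N+1: run everything except the core tests.
    sys.exit(pytest.main(["-m", "not core", "tests/"]))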

Gishu
My issue's not so much speed as occasionally getting a ton of failed tests that are difficult to sift through. I'm thinking it would be useful to be able to, e.g., put a decorator on a test to indicate that it shouldn't run unless some other test method has passed. Maybe a return value from the identified method could be supplied as a kwarg to the decorated test method. Of course, this would require the test framework to build a dependency graph. It might just be that I need to refactor my test suite and/or code in such cases; I think Aaron's advice is quite sage.
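Roughly the shape I have in mind, sketched on top of plain unittest (the depends_on/provides names and the module-global registry are made up for illustration, and unittest's alphabetical method ordering stands in for a real dependency graph):

    # depends.py -- hypothetical dependency decorators for unittest.
    import functools
    import unittest

    _passed = {}  # name of a passed test -> its return value

    def provides(test):
        # Record the return value once the test body finishes without
        # raising, so that dependent tests can run and reuse it.
        @functools.wraps(test)
        def wrapper(self, *args, **kwargs):
            _passed[test.__name__] = test(self, *args, **kwargs)
        return wrapper

    def depends_on(name, kwarg=None):
        # Skip the decorated test unless the named test has passed;
        # optionally pass that test's return value in as a kwarg.
        def decorator(test):
            @functools.wraps(test)
            def wrapper(self, *args, **kwargs):
                if name not in _passed:
                    self.skipTest("prerequisite %s has not passed" % name)
                if kwarg is not None:
                    kwargs[kwarg] = _passed[name]
                test(self, *args, **kwargs)
            return wrapper
        return decorator

    class Example(unittest.TestCase):
        @provides
        def test_a_build_object(self):
            obj = {"answer": 42}
            self.assertIn("answer", obj)
            return obj

        @depends_on("test_a_build_object", kwarg="obj")
        def test_b_use_object(self, obj):
            self.assertEqual(obj["answer"], 42)

    if __name__ == "__main__":
        unittest.main()

If the prerequisite fails, the dependent shows up as a skip rather than as yet another failure, which is the kind of de-emphasis I'm after; a real framework would topologically sort the graph instead of leaning on method names.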
intuited
@intuited - yeah, if you can avoid this by restructuring your test code, do so. My answer was for the case where it is too much effort to do so, in which case this might be an acceptable compromise that lets you distinguish between core and dependent test failures.
Gishu
A: 

The test runners py.test, Nosetests, and unit2/unittest2 all support the notion of exiting after the first failure. py.test more generally lets you specify "--maxfail=NUM" to stop running and reporting after NUM failures. This may already help your case, especially since maintaining and updating dependency declarations for tests may not be that interesting a task.
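
For instance, a minimal sketch, assuming py.test is importable (the tests/ path is illustrative):

    import pytest

    # Stop after the very first failure ("-x" is shorthand for --maxfail=1).
    pytest.main(["-x", "tests/"])

    # Or allow a small budget of failures before bailing out.
    pytest.main(["--maxfail=3", "tests/"])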

hpk42
Hmm... I think this wouldn't actually help, since the choice of which test fails first would be unrelated to the underlying dependency graph, i.e. I'd see a failure in some higher-level test rather than a failure from the root cause. I'm mostly hoping to find a way to do this for future projects; I'm not sure it would really be worth it for existing ones.
intuited
I often group my tests so that the fine-grained unit tests come first and the higher-level tests after; py.test runs tests in file order, which helps. I agree that some cross-file dependency declaration might be useful, though.
hpk42