I have a function to test, under_test, and a set of expected input/output pairs:

[
(2, 332),
(234, 99213),
(9, 3),
# ...
]

I would like each one of these input/output pairs to be tested in its own test_* method. Is that possible?

This is sort of what I want, but forcing every single input/output pair into a single test:

class TestPreReqs(unittest.TestCase):

    def setUp(self):
        self.expected_pairs = [(23, 55), (4, 32)]

    def test_expected(self):
        for exp in self.expected_pairs:
            self.assertEqual(under_test(exp[0]), exp[1])

if __name__ == '__main__':
    unittest.main()

(Also, do I really want to be putting that definition of self.expected_pairs in setUp?)

UPDATE: Trying doublep's advice:

class TestPreReqs(unittest.TestCase):

    def setUp(self):
        expected_pairs = [
                          (2, 3),
                          (42, 11),
                          (3, None),
                          (31, 99),
                         ]

        for k, pair in expected_pairs:
            setattr(TestPreReqs, 'test_expected_%d' % k, create_test(pair))

    def create_test (pair):
        def do_test_expected(self):
            self.assertEqual(get_pre_reqs(pair[0]), pair[1])
        return do_test_expected


if __name__ == '__main__':
    unittest.main()   

This does not work. 0 tests are run. Did I adapt the example incorrectly?

+2  A: 

With nose tests, then yes. See this: http://somethingaboutorange.com/mrl/projects/nose/0.11.1/writing_tests.html#test-generators

Paul Hankin
+2  A: 

Not tested:

class TestPreReqs(unittest.TestCase):
    ...

def create_test(pair):
    def do_test_expected(self):
        self.assertEqual(under_test(pair[0]), pair[1])
    return do_test_expected

for k, pair in enumerate([(23, 55), (4, 32)]):
    test_method = create_test(pair)
    test_method.__name__ = 'test_expected_%d' % k
    setattr(TestPreReqs, test_method.__name__, test_method)

If you use this often, you could prettify this by using utility functions and/or decorators, I guess. Note that pairs are not an attribute of TestPreReqs object in this example (and so setUp is gone). Rather, they are "hardwired" in a sense to the TestPreReqs class.
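One way to prettify this, as the answer suggests, is a class decorator that does the `setattr` loop for you. This is only a sketch: `for_pairs` and the stand-in `under_test` below are hypothetical names, not part of the original answer.

```python
import unittest

def under_test(x):          # stand-in for the real function under test
    return x * 2

def for_pairs(pairs):
    """Class decorator: add one test_expected_<k> method per (input, output) pair."""
    def decorate(cls):
        for k, (inp, out) in enumerate(pairs):
            def test(self, inp=inp, out=out):   # bind loop values via defaults
                self.assertEqual(under_test(inp), out)
            test.__name__ = 'test_expected_%d' % k
            setattr(cls, test.__name__, test)
        return cls
    return decorate

@for_pairs([(2, 4), (21, 42)])
class TestPreReqs(unittest.TestCase):
    pass
```

Because the decorator runs at class-creation (import) time, test runners that collect tests by scanning the class still see every generated method.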

doublep
+1. This is a solution that I have been using successfully on a large project to compare a timetable-generating system with its expected output. In my experience, although somewhat hackish, it works really well: you get a test case for every pair and will know exactly where your tests are failing.
knutin
This looks interesting but I wasn't able to get it to work. I updated the OP with my attempt.
Rosarch
@Rosarch: I guess the problem is in `do_test_expected`'s name. In the edited snippet I change the name dynamically; as a bonus, this should also give more readable output (a guess, I didn't check this).
doublep
The problem with this technique is that tools such as nose that automatically find and run tests will not find the tests, since they do not exist until the code is executed.
Dave Kirby
@Dave Kirby: the code is run at import time so `nose` should find it.
J.F. Sebastian
+2  A: 

nose (suggested by @Paul Hankin)

#!/usr/bin/env python
# file: test_pairs_nose.py
from nose.tools import eq_ as eq

from mymodule import f

def test_pairs(): 
    for input, output in [ (2, 332), (234, 99213), (9, 3), ]:
        yield _test_f, input, output

def _test_f(input, output):
    try:
        eq(f(input), output)
    except AssertionError:
        if input == 9: # expected failure
            from nose.exc import SkipTest
            raise SkipTest("expected failure")
        else:
            raise

if __name__ == "__main__":
    import nose; nose.main()

Example:

$ nosetests test_pairs_nose -v
test_pairs_nose.test_pairs(2, 332) ... ok
test_pairs_nose.test_pairs(234, 99213) ... ok
test_pairs_nose.test_pairs(9, 3) ... SKIP: expected failure

----------------------------------------------------------------------
Ran 3 tests in 0.001s

OK (SKIP=1)

unittest (approach similar to @doublep's one)

#!/usr/bin/env python
import unittest2 as unittest
from mymodule import f

def add_tests(generator):
    def class_decorator(cls):
        """Add tests to `cls` generated by `generator()`."""
        for f, input, output in generator():
            test = lambda self, i=input, o=output, f=f: f(self, i, o)
            test.__name__ = "test_%s(%s, %s)" % (f.__name__, input, output)
            setattr(cls, test.__name__, test)
        return cls
    return class_decorator

def _test_pairs():
    def t(self, input, output):
        self.assertEqual(f(input), output)

    for input, output in [ (2, 332), (234, 99213), (9, 3), ]:
        tt = t if input != 9 else unittest.expectedFailure(t)
        yield tt, input, output

class TestCase(unittest.TestCase):
    pass
TestCase = add_tests(_test_pairs)(TestCase)

if __name__=="__main__":
    unittest.main()

Example:

$ python test_pairs_unit2.py -v
test_t(2, 332) (__main__.TestCase) ... ok
test_t(234, 99213) (__main__.TestCase) ... ok
test_t(9, 3) (__main__.TestCase) ... expected failure

----------------------------------------------------------------------
Ran 3 tests in 0.000s

OK (expected failures=1)

If you don't want to install unittest2 then add:

try:
    import unittest2 as unittest
except ImportError:
    import unittest
    if not hasattr(unittest, 'expectedFailure'):
        import functools
        def _expectedFailure(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                try:
                    func(*args, **kwargs)
                except AssertionError:
                    pass
                else:
                    raise AssertionError("UnexpectedSuccess")
            return wrapper
        unittest.expectedFailure = _expectedFailure
J.F. Sebastian
A: 

I had to do something similar. I created a simple TestCase subclass that takes the input and output values in its __init__, like this:

class KnownGood(unittest.TestCase):
    def __init__(self, input, output):
        super(KnownGood, self).__init__()
        self.input = input
        self.output = output
    def runTest(self):
        self.assertEqual(function_to_test(self.input), self.output)

I then made a test suite with these values:

def suite():
    suite = unittest.TestSuite()
    suite.addTests(KnownGood(input, output) for input, output in known_values)
    return suite

You can then run it from your main method:

if __name__ == '__main__':
    unittest.TextTestRunner().run(suite())

The advantages of this are:

  • As you add more values, the number of reported tests increases, which makes you feel like you are doing more.
  • Each test case can fail individually.
  • It's conceptually simple, since each input/output pair is converted into one TestCase.
Rory