We do something like this to run what are really integration (regression) tests within the unittest framework -- actually an in-house customization of it, which gives us enormous benefits such as running the tests in parallel on a cluster of machines, etc, etc. The great added value of that customization is why we're so keen to stay within the unittest framework.
Each test is represented by a file holding the parameters to use in that test, followed by the expected results. Our integration_test script reads all such files from a directory, parses each of them, and then calls:
def addtestmethod(testcase, uut, testname, parameters, expresults):
    # Build the test method as a closure over this file's data.
    def testmethod(self):
        results = uut(parameters)
        self.assertEqual(expresults, results)
    # Rename the function (note: __name__, not name) so failure
    # reports and the runner show the right test name.
    testmethod.__name__ = testname
    setattr(testcase, testname, testmethod)
We start with an empty test case class:
import unittest

class IntegrationTest(unittest.TestCase): pass
and then call addtestmethod(IntegrationTest, ...) in a loop that reads all the relevant files and parses each one to get testname, parameters, and expresults; a sketch of such a loop follows.
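For concreteness, here's a minimal sketch of what that driver loop might look like. The parse_test_file helper and its file format (first line parameters, rest expected results) are assumptions for illustration -- your real parsing depends on how the files are laid out -- and uut stands for whatever unit-under-test callable you're exercising:

import os
import unittest

def parse_test_file(path):
    # Hypothetical format: first line holds the parameters,
    # the remainder holds the expected results.
    with open(path) as f:
        parameters = f.readline().strip()
        expresults = f.read().strip()
    return parameters, expresults

def build_integration_test(testdir, uut):
    class IntegrationTest(unittest.TestCase): pass
    for filename in sorted(os.listdir(testdir)):
        # Method names must start with 'test' so the default
        # loader (unittest.TestLoader) picks them up.
        testname = 'test_' + os.path.splitext(filename)[0]
        parameters, expresults = parse_test_file(
            os.path.join(testdir, filename))
        # addtestmethod is the function shown above.
        addtestmethod(IntegrationTest, uut, testname,
                      parameters, expresults)
    return IntegrationTest

Once built, the class behaves just like a hand-coded TestCase: the loader discovers the dynamically added methods exactly as it would ordinary ones, which is what lets a specialized runner work with it unchanged.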
Finally, we call our in-house specialized test runner which does the heavy lifting (distributing the tests over available machines in a cluster, collecting results, etc). We didn't want to reinvent that rich-value-added wheel, so we're making a test case as close to a typical "hand-coded" one as needed to "fool" the test runner into working right for us;-).
Unless you have specific reasons (good test runners or the like) to use unittest's approach for your (integration?) tests, you may find your life is simpler with a different approach. However, this one is quite viable and we're quite happy with its results (which mostly include blazingly fast runs of large suites of integration/regression tests!-).