The way DUnit normally works is that you write some published methods, and DUnit runs them as tests. What I want to do is a little different: I want to create tests at run time, driven by data. I'm trying to test a module that processes input files to create output files. I have a set of test input files with corresponding known-good output files. The idea is to dynamically create one test per input file, where each test processes its input and checks the result against the known-good output.
The actual source of the data isn't important here, though; the difficulty is making DUnit behave in a data-driven way. For the sake of this question, suppose the data source is just a random number generator. Here is a concrete example problem that gets to the heart of the difficulty:
Create some test objects (TTestCase or whatever) at run time, say 10 of them, where each one:
- Is named at run time from a randomly generated integer. (By 'name' I mean the name of the test that appears in the test-runner tree.)
- Passes or fails based on that random integer: pass for even, fail for odd.
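To make the difficulty concrete, here is roughly the kind of code I have in mind. This is my own sketch, not working code: TRandomTest and RegisterRandomTests are names I made up, and as written it runs into exactly the naming problem described below, because every test instance ends up named after the single published method.

```pascal
type
  // Hypothetical test class; FValue is the random data driving the test.
  TRandomTest = class(TTestCase)
  private
    FValue: Integer;
  published
    procedure Go;  // passes for even FValue, fails for odd
  end;

procedure TRandomTest.Go;
begin
  CheckTrue(FValue mod 2 = 0, Format('%d is odd', [FValue]));
end;

procedure RegisterRandomTests;
var
  Suite: TTestSuite;
  I: Integer;
  Test: TRandomTest;
begin
  Suite := TTestSuite.Create('Random tests');
  for I := 1 to 10 do
  begin
    // 'Go' is the published method DUnit will invoke -- but it also
    // becomes the test's display name, so all 10 tests look identical
    // in the test-runner tree. That is the problem.
    Test := TRandomTest.Create('Go');
    Test.FValue := Random(1000);
    Suite.AddTest(Test);
  end;
  RegisterTest(Suite);
end;
```

What I can't see is how to make each of those 10 instances show up in the tree under its own randomly generated name instead of 'Go'.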
DUnit looks as if it was built with enough flexibility to make this possible, but I'm not sure it actually is. I tried to create my own test class by inheriting from TAbstractTest and implementing ITest, but some crucial methods weren't accessible. I also tried inheriting from TTestCase, but that class is closely tied to the idea of running published methods, and the tests are named after those methods: I couldn't just have a single method called, say, Go, because then all my tests would be named 'Go', and I want each test to be individually named.
Failing that, is there an alternative to DUnit that could do what I want?