views: 59
answers: 2
For things that are hard to test with traditional xUnit-style methods, such as various converters, XSLT, etc., I often employ a technique based on output comparison. The test program produces some output when it is run for the first time. I make sure it's correct and save it for later use. On subsequent runs the program compares the new output with the previously saved output and shows any differences. After that I may either fix the program to make the output match again, or (and this is important!) I may accept the changes so that the data used for comparison are updated.
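Here is a minimal sketch of that workflow in Python, just to make the idea concrete. The produce_output() function is a hypothetical stand-in for the real program, and the --accept flag and expected_output.txt file name are made up for the example:

    import difflib
    import sys
    from pathlib import Path

    GOLDEN = Path("expected_output.txt")   # previously accepted output

    def produce_output() -> str:
        """Hypothetical stand-in for the real program under test."""
        return "result line 1\nresult line 2\n"

    def main() -> int:
        actual = produce_output()
        if not GOLDEN.exists() or "--accept" in sys.argv:
            # First run, or the differences were reviewed and accepted:
            # the current output becomes the new reference.
            GOLDEN.write_text(actual)
            print("golden output saved")
            return 0
        expected = GOLDEN.read_text()
        if actual == expected:
            print("OK: output matches the saved copy")
            return 0
        # Show what changed so a human can decide: fix the program,
        # or re-run with --accept to update the saved copy.
        sys.stdout.writelines(difflib.unified_diff(
            expected.splitlines(keepends=True),
            actual.splitlines(keepends=True),
            fromfile="expected", tofile="actual"))
        return 1

    if __name__ == "__main__":
        sys.exit(main())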

Of course, there are other aspects, such as using different preprocessing for comparison and for diffs: XML is compared using its canonical representation, JSON is parsed first, s-expressions are read with the Lisp reader, and so on, while a pretty-printed representation is used for diffs. The comparison can also be re-run with specified transformations, such as removing parts of the output.
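For example (again only a sketch; the helper names are made up), JSON output might be normalized so that the canonical form is used for the equality check and the pretty-printed form only when showing a diff:

    import json

    def json_compare_form(text: str) -> str:
        # Canonical form for the equality check: parse, then re-serialize
        # with sorted keys and no insignificant whitespace.
        return json.dumps(json.loads(text), sort_keys=True, separators=(",", ":"))

    def json_diff_form(text: str) -> str:
        # Pretty-printed form used only when showing differences to a human.
        return json.dumps(json.loads(text), sort_keys=True, indent=2)

    actual = '{"b": 2, "a": 1}'
    expected = '{\n  "a": 1,\n  "b": 2\n}'
    assert json_compare_form(actual) == json_compare_form(expected)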

I use such techniques both for Python, where the driver program calls a WSGI application with requests defined in a test script, and for some Common Lisp programs, including a converter from random/broken HTML to a proprietary XML format and a linear accelerator control system in which the control algorithms produce s-expression-based output as they execute against device simulators that produce some output of their own.
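For the Python/WSGI case, the driver can call the application in-process and concatenate the responses into the output that gets compared. This is only an illustration of that setup: demo_app, call_wsgi, and the request list are hypothetical, not the actual test script:

    from wsgiref.util import setup_testing_defaults

    def demo_app(environ, start_response):
        # Hypothetical stand-in for the WSGI application under test.
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [f"hello {environ['PATH_INFO']}\n".encode()]

    def call_wsgi(app, path):
        environ = {"PATH_INFO": path, "REQUEST_METHOD": "GET"}
        setup_testing_defaults(environ)   # fill in the remaining WSGI keys
        captured = {}

        def start_response(status, headers, exc_info=None):
            captured["status"] = status

        body = b"".join(app(environ, start_response))
        return f"{captured['status']}\n{body.decode()}\n"

    # Requests defined by the test script; the concatenated responses form
    # the output that is compared against the previously saved output.
    output = "".join(call_wsgi(demo_app, path) for path in ["/a", "/b"])
    print(output)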

The problem is, I don't know the exact name for this technique. I know it's used in other places, and there is even a testing framework called izh-test that does something similar. But I've never heard a specific name for it, including the 'accept changes' part. Data-driven testing? That doesn't seem quite right. Any suggestions?

+3  A: 

ABT, or Adaptive Baseline Testing. You establish a baseline, but have a provision for adapting that baseline depending on test results.

The problem with ABT is that I completely made it up. I'm not sure whether there's a name in wider use for this, but I look forward to reading other answers to see if anyone else knows.

Alan
A: 

What you are doing is a Black Box Test.

If you have a golden copy that you have validated ("I make sure it's correct and save it for later use") and at some point you decide to replace it, you have to validate it again.

  • If you replace it because the new golden copy is better or has more information, then it's just an update.
  • If you replace it because the old golden copy was faulty, then your previous validation was no good; you have a software test issue and may need to rerun other tests that use this golden copy.
  • If you replace it because the data in the old copy is no longer good, then something has changed in your program; that change or fix invalidates your old golden copy and requires you to validate the new one.

In any case, what you should do is validate the new golden copy ("I may accept the changes so that the data used for comparison are updated").

Whatever situation you are in, it is still a Black Box Test: you provide an input, you get an output, and you compare the output against expected results.

EKI