Our group is building a process modelling application that simulates an industrial process. The final output of this process is a set of numbers representing chemistry and flow rates.

This application is based on some very old software that uses the exact same underlying mathematical model to create the simulation. Thousands of variables are involved in the simulation.

Although each component has been unit tested, we now need to be able to make sure that the data output produced by our software matches that of the old simulation software. I am wondering how best to approach this issue in a formalised and rigorous manner.

The old program works by specifying the input via a text file, so I was thinking we could programmatically take each variable, adjust its value in the file (and correspondingly in our new application), then compare the outputs between the new and old applications. We would do this for every variable in the model.
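For the text-file step, a minimal sketch of rewriting one variable's value in place. The `name = value` line format is an assumption for illustration; the pattern would need adapting to the old program's actual input syntax:

```python
import re

def set_variable(text, name, value):
    """Return a copy of the input-file text with 'name = <old value>'
    rewritten to 'name = <value>'. Assumes one 'name = number' line per
    variable, which may not match the real file format."""
    pattern = re.compile(rf"^(\s*{re.escape(name)}\s*=\s*).*$", re.MULTILINE)
    new_text, count = pattern.subn(rf"\g<1>{value}", text)
    if count == 0:
        raise KeyError(f"variable {name!r} not found in input file")
    return new_text
```

Raising on a missing variable is deliberate: a silent no-op here would make a whole sweep pass vacuously.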

We know the allowable range for each variable, so I suppose a random sample of a few values across each variable's range is enough to show correctness for that particular variable.
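The sweep described above could be driven by something like the following sketch. The callable interface (each simulator as a function from an input dict to an output dict), the one-variable-at-a-time strategy, and the relative tolerance are all assumptions for illustration, not part of the original tooling:

```python
import random

def back_to_back_test(run_old, run_new, variable_ranges,
                      samples=3, rel_tol=1e-9, seed=0):
    """For each variable, sample values in its allowable range (plus the
    two boundaries), run both simulators on identical inputs, and return
    a list of mismatches: (variable, value, output key, old, new).

    run_old / run_new: hypothetical wrappers around the two programs,
    taking {variable: value} and returning {output: value}.
    variable_ranges: {variable: (lo, hi)} allowable range per variable.
    """
    rng = random.Random(seed)
    # Hold every other variable at a mid-range baseline while one varies.
    baseline = {n: (lo + hi) / 2 for n, (lo, hi) in variable_ranges.items()}
    mismatches = []
    for name, (lo, hi) in variable_ranges.items():
        values = [lo, hi] + [rng.uniform(lo, hi) for _ in range(samples)]
        for v in values:
            inputs = dict(baseline, **{name: v})
            old_out, new_out = run_old(inputs), run_new(inputs)
            for key in old_out:
                a, b = old_out[key], new_out[key]
                if abs(a - b) > rel_tol * max(abs(a), abs(b), 1.0):
                    mismatches.append((name, v, key, a, b))
    return mismatches
```

Comparing with a relative tolerance rather than exact equality matters here: two implementations of the same floating-point model can legitimately differ in the last few bits.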

Any thoughts on this approach? Any other ideas?

A: 

Comparing the output of the old and new applications is definitely a good idea. This is sometimes called back-to-back testing.

Regarding test input samples, get familiar with the following concepts:

- Boundary value analysis
- Equivalence partitioning
Yes, I figured that the boundary conditions would have to be tested for all variables. Thanks for the links though; I was not familiar with Equivalence partitioning, and this is exactly what we need.
Alex