views: 93

answers: 2
I have been evolving an automation and statistics-generation program through a series of rapid prototypes, to see whether the license cost of an API generates a good return on investment. The limited time frame and my own limited experience have led to a useful tool with no automated quality or correctness tests and no metrics.

The program deals with the localization process for PDF documents. Specifically, it generates a report on some of the contents of the files (approximate word count, image count, etc.) and has some content extraction and processing features. It is mainly used to reduce the time and effort of estimating the cost of a PDF localization project.

The application has now been approved for a more formal development process, including a request for a bug-tracking system and a preliminary test, release, and feedback cycle.

The question, then, is how would you approach QA and testing for this kind of application, where the numbers are often a best guess based on some heuristic, and the processed output is not always useful due to the horrific construction of the source documents? I plan to add warnings to the report when the numbers are obviously crazy, but what else can be done to guarantee quality?
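To make the "obviously crazy" warnings concrete, one option is a sanity-check pass over the report's metrics with plausibility bounds. The sketch below is illustrative Python (the project is C#, but the idea is language-agnostic); the metric names and thresholds are invented assumptions, not values from the actual tool.

```python
# Hypothetical sketch: flag heuristic metrics that fall outside
# plausible bounds. All names and thresholds are placeholders.

def sanity_warnings(metrics):
    """Return a list of warning strings for implausible metric values."""
    warnings = []
    word_count = metrics.get("word_count", 0)
    page_count = metrics.get("page_count", 1)

    if word_count < 0:
        warnings.append("word_count is negative")

    # A dense page of running text rarely exceeds ~1000 words;
    # 2000/page is used here as a deliberately loose upper bound.
    words_per_page = word_count / max(page_count, 1)
    if words_per_page > 2000:
        warnings.append(f"implausible density: {words_per_page:.0f} words/page")

    if metrics.get("image_count", 0) > 100 * page_count:
        warnings.append("image_count far exceeds typical page capacity")

    return warnings
```

The warnings can then be rendered into the report alongside the figures they refer to, so a human reviewer knows which estimates to double-check.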

So far the most sophisticated solution I have is to verify the results of some helper methods through assertion testing in the build environment, and to write a bunch of traditional user test cases (which I'd prefer to avoid).
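For heuristic helpers, those build-time assertions can check a tolerance band rather than an exact value, which keeps the tests stable as the heuristic is tuned. A minimal sketch in Python (the helper shown is hypothetical, not the project's actual code):

```python
import unittest

def approx_word_count(text):
    # Hypothetical helper: naive whitespace tokenisation.
    return len(text.split())

class HelperTests(unittest.TestCase):
    def test_known_input(self):
        # Exact assertion for a trivially verifiable input.
        self.assertEqual(approx_word_count("one two three"), 3)

    def test_within_tolerance(self):
        # For heuristic output, assert a range rather than an exact value.
        estimate = approx_word_count("a b c d e f g h i j")
        self.assertTrue(8 <= estimate <= 12)
```

The same pattern translates directly to NUnit or MSTest in C#.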

How do you test for subjective quality measures?

I am working in C#, but I favour a general best practices answer over anything too framework specific.

+2  A: 

I'm not sure exactly what your application is doing, but to answer the general question: build a collection of test cases that represents your range of inputs and see whether the program judges them correctly. You can't really get around testing against actual input documents.
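In practice that collection becomes a regression corpus: representative documents paired with manually verified expected metrics and a per-document tolerance (looser for messier sources). A sketch of the harness, with placeholder file names and figures:

```python
# Sketch of a regression corpus for heuristic metrics.
# File names, expected counts, and tolerances are placeholders.

CORPUS = [
    # (document, expected_word_count, relative_tolerance)
    ("simple_letter.pdf", 350, 0.05),
    ("scanned_brochure.pdf", 1200, 0.20),  # messy source: looser bound
]

def check_corpus(analyse):
    """Run `analyse` on each corpus entry; return entries outside tolerance."""
    failures = []
    for name, expected, tol in CORPUS:
        actual = analyse(name)
        if abs(actual - expected) > expected * tol:
            failures.append((name, expected, actual))
    return failures
```

Run against the real analyser in the build, this catches regressions in the heuristics without demanding exact reproducibility.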

And then there's a point where you'll have to accept that there's a limit to what can be accomplished with automated testing. When things get really subjective (aesthetics or usability, for instance), you're going to need an actual human to make a useful judgment.

I wish I could give a more helpful answer.

Rik
Looks like we're stuck with doing things the old-fashioned way then.
IanGilham
A: 

Try Approval Tests.
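
The core idea of approval testing is to compare the program's output against a human-"approved" snapshot file, and write out a "received" file for review whenever they differ. A hand-rolled sketch of that workflow in Python (not the ApprovalTests library's actual API):

```python
import os

def verify_against_approved(received_text, name, approved_dir="approvals"):
    """Minimal approval-test sketch: compare output with a stored
    'approved' file; on mismatch, write a 'received' file for review."""
    approved_path = os.path.join(approved_dir, name + ".approved.txt")
    received_path = os.path.join(approved_dir, name + ".received.txt")

    approved = ""
    if os.path.exists(approved_path):
        with open(approved_path) as f:
            approved = f.read()

    if received_text == approved:
        return True

    os.makedirs(approved_dir, exist_ok=True)
    with open(received_path, "w") as f:
        f.write(received_text)
    return False  # a human inspects the diff, then promotes received -> approved
```

Once a reviewer promotes a received file to approved, future runs pass automatically until the output changes again.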

Carl Manaster
Looks like it could be useful, but doesn't really apply to my problem in this case.
IanGilham