Testing a text-to-speech engine is a daunting task. The engine parses input and applies pronunciation rules based on phonetic analysis of individual words. In addition, exception lists for the pronunciation rules exist to improve the end result. Projects such as Project Gutenberg let you literally throw the book at the problem; even so, the input domain is so large that I can never feel comfortable. I am after a six-nines solution (99.9999% crash proof). Throwing random text at the engine shows clearly that I am only at three nines, and subsequent fixes don't appear to be helping. I know what to do in this case (revisit the error-handling mechanisms within the engine so that they degrade gracefully), but the general issue persists: given an effectively infinite input domain, how do you prove software quality?
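
For the random-text test mentioned above, a minimal fuzzing harness might look like the sketch below. synthesize() is a hypothetical stand-in for the real engine binding (it does nothing here so the harness runs as-is); in practice each case would be driven in a subprocess so that a native crash in the engine does not take down the harness itself.

    import random
    import string

    def synthesize(text):
        # Placeholder for the real TTS engine binding; swap in the actual call.
        pass

    def random_text(max_len=200):
        # Mix printable ASCII with a few arbitrary non-ASCII characters to
        # exercise the parser's handling of unexpected input.
        alphabet = string.printable + "éßяλ"
        length = random.randint(1, max_len)
        return "".join(random.choice(alphabet) for _ in range(length))

    def estimate_crash_rate(trials=10000):
        crashes = 0
        for _ in range(trials):
            try:
                synthesize(random_text())
            except Exception:
                crashes += 1
        return crashes / float(trials)

    if __name__ == "__main__":
        print("observed crash rate: %.6f" % estimate_crash_rate())

Note that demonstrating a crash rate below one in a million requires on the order of several million crash-free trials, which is one reason a six-nines claim is hard to back with random testing alone.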

A: 

How do you test your engine? I would try using a speech-recognition engine (like the one built into Microsoft Windows) to check the quality of the output. For test volume, I would use a dictionary of all words plus texts from several books by different authors.
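
A rough sketch of that round-trip idea is below; synthesize() and recognize() are hypothetical wrappers for the TTS engine under test and whatever speech-recognition engine is available, and the scoring simply uses a string-similarity ratio between the input and the recognized transcript.

    import difflib

    def synthesize(text):
        # Placeholder for the TTS engine under test; should return audio data.
        return b""

    def recognize(audio):
        # Placeholder for a speech-recognition engine (e.g. a system ASR API).
        return ""

    def round_trip_score(text):
        # Synthesize the text, feed the audio back through ASR, and measure
        # how close the recognized transcript is to the original input.
        transcript = recognize(synthesize(text))
        return difflib.SequenceMatcher(None, text.lower(), transcript.lower()).ratio()

    if __name__ == "__main__":
        for sample in ["hello world", "the quick brown fox"]:
            print(sample, round_trip_score(sample))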

Artem
+1  A: 

Test for coverage. Make sure that you're hitting all of your branches and all of your loops, exercising all your code and making sure that it works correctly or fails correctly. Depending on how important it is, try to achieve 100% MCDC Coverage (modified condition/decision coverage); for each conditional, determine all the permutations of inputs that factor into the result and make sure you test every permutation.
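
As a concrete illustration of "test every permutation", the sketch below drives one made-up decision through all 2^3 combinations of its boolean conditions against a hand-written truth table. should_fall_back() is a hypothetical example, not taken from the engine in question.

    from itertools import product

    def should_fall_back(has_exception_entry, rule_matched, strict_mode):
        # Made-up decision: fall back to spelling the word out when no rule
        # matched and no exception entry exists, or whenever strict mode is on.
        return (not has_exception_entry and not rule_matched) or strict_mode

    # Hand-written truth table covering every permutation of the conditions.
    EXPECTED = {
        (False, False, False): True,
        (False, False, True):  True,
        (False, True,  False): False,
        (False, True,  True):  True,
        (True,  False, False): False,
        (True,  False, True):  True,
        (True,  True,  False): False,
        (True,  True,  True):  True,
    }

    def test_all_permutations():
        for combo in product([False, True], repeat=3):
            assert should_fall_back(*combo) == EXPECTED[combo], combo

    if __name__ == "__main__":
        test_all_permutations()
        print("all 8 permutations checked")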

TALlama
