Recently I faced an interview question: "If there is a database with many stored procedures and functions, how would you design and develop a framework for automation testing?"
How would you answer that question?
Iterate over the procedures and functions and generate code to call them with their defined parameters?
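That enumeration step could be sketched as follows. This is a minimal sketch in Python, assuming the routine metadata has already been read from a catalog view such as INFORMATION_SCHEMA.ROUTINES / INFORMATION_SCHEMA.PARAMETERS; the routine and parameter names below are made-up examples, not from the original question.

```python
# Hypothetical routine metadata. In a real framework this dict would be
# populated by querying the database's catalog (e.g. INFORMATION_SCHEMA);
# the names here are invented for illustration.
routines = {
    "update_customer": ["customer_id", "new_name"],
    "get_order_total": ["order_id"],
}

def generate_call(name, params):
    """Build a placeholder-based CALL statement for one stored routine."""
    placeholders = ", ".join("?" for _ in params)
    return f"CALL {name}({placeholders})"

# Emit one parameterised call per routine; a test harness would bind
# testcase-specific values to the placeholders before executing.
for name, params in routines.items():
    print(generate_call(name, params))
```

Generating placeholder-based statements (rather than interpolating values) keeps the generated code reusable across testcases and avoids quoting problems.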
The problem is equivalent to being given a "database with well-defined interfaces" and testcases to test those interfaces.
All that needs to be done is to automate the manual process of running a testcase, comparing its results to the ideal ones, and outputting the result as "passed" or "failed".
This may be one way to do it:

1) For each testcase, store the expected output in idealResults.txt.
2) Run the testcase and capture its actual output in currentResults.txt.
3) Compare idealResults.txt and currentResults.txt and report "passed" or "failed".
If the only allowed results for testcases are exact matches, things would be simple (a simple Perl script would do).

Well, I would start by asking what sort of tests: unit tests? System/integration tests? Regression tests? Is this a one-shot exercise for (say) testing the success of a database upgrade process, or is it the foundation for moving the database layer into a TDD project?
Depending on the answers to those questions, I might then ask "which database?", as some testing frameworks have been written specifically for certain flavours of database. Otherwise there are generic products like dbUnit. I certainly wouldn't want to write a framework from scratch; that time would be better spent writing tests.
I would want to know about the nature of the stored procedures: how many are table APIs, how many are transaction APIs, and how many are utilities (i.e. don't execute DML)? That then leads on to discussions of test data.