I'm brainstorming an application to handle various data integrity checks. Each individual check could query a number of production tables, evaluate the results, and report an error with the data relevant to that check. For example, one check would look for customers with a scheduled payment but no remaining balance; a different check might look for credit card transactions that have been authorized but not settled for more than 3 days. Two completely unrelated checks. The dataset from the first one would contain things like customer number, scheduled payment date, payoff date, etc. The second check would have transaction number, card type, last 4 digits of the card, amount, etc.
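To make the shapes concrete, if each check kept its own results table, the two would look something like this (table and column names are just illustrative):

    -- Check A: scheduled payment but no remaining balance
    CREATE TABLE CheckA_Results (
        ResultId             INT IDENTITY PRIMARY KEY,
        CheckedAt            DATETIME2 NOT NULL,
        CustomerNumber       INT NOT NULL,
        ScheduledPaymentDate DATE NOT NULL,
        PayoffDate           DATE NULL
    );

    -- Check B: authorized but not settled for more than 3 days
    CREATE TABLE CheckB_Results (
        ResultId          INT IDENTITY PRIMARY KEY,
        CheckedAt         DATETIME2 NOT NULL,
        TransactionNumber INT NOT NULL,
        CardType          VARCHAR(20) NOT NULL,
        CardLast4         CHAR(4) NOT NULL,
        Amount            DECIMAL(12,2) NOT NULL
    );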
I would like to store the result datasets in a common schema so I can query for any errors from Check A for a specific customer in the last 3 months, or see how many times Check B has returned an error for distinct transactions. Other tables would manage issue resolution and the like. The only thing I've come up with so far is a table with ~20 columns: one column identifying the specific check, one for the date/time, and the rest generic varchar columns capable of holding any type of data. There are any number of reasons why this makes me cringe, but performance ranks pretty high up there. I'm hoping to avoid a separate table for each check, but combining that with a lookup table for the secondary functionality may be the only way to go.
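Here's roughly what that single generic table would look like (again, the names and the exact column count are just a sketch), along with the kind of query I'd want to run against it:

    CREATE TABLE CheckResults (
        ResultId  INT IDENTITY PRIMARY KEY,
        CheckId   INT NOT NULL,          -- which check produced this row
        CheckedAt DATETIME2 NOT NULL,
        Value01   VARCHAR(100) NULL,     -- meaning depends on CheckId
        Value02   VARCHAR(100) NULL,
        Value03   VARCHAR(100) NULL,
        -- ... and so on, up to Value18 or so
        Value18   VARCHAR(100) NULL
    );

    -- "Errors from Check A for a specific customer in the last 3 months",
    -- assuming Value01 happens to hold the customer number for that check
    SELECT *
    FROM CheckResults
    WHERE CheckId = 1
      AND Value01 = '12345'
      AND CheckedAt >= DATEADD(MONTH, -3, SYSDATETIME());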
For the curious, I'm trying to keep this in the Microsoft world (VB.NET and SQL Server), but I'm open to other ideas.