views: 395
answers: 7
I have a large set of NUnit tests; I need to import the results from a given run into a database, then characterize the set of results and present them to the users (email for test failures, web presentation for examining results). I also need to track multiple runs over time (for reporting failure rates and similar trends).

The XML will be the XML generated by nunit-console. I would like to import the XML with a minimum of fuss into some database that can then be used to persist and present results. We will have a number of custom categories that we will need to be able to sort across, as well.

Does anyone know of a database schema that can handle importing this type of data that can be customized to our individual needs? This type of problem seems like it should be common, and so a common solution should exist for it, but I can't seem to find one. If anyone has implemented such a solution before, advice would be appreciated as well.
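For illustration, here is a rough sketch of the kind of import I have in mind, using SQLite just to keep the example self-contained. The table and column names are placeholders, and the XML attribute names assume the NUnit 2.x nunit-console output format:

    # Minimal sketch: load one nunit-console result file into SQLite.
    # Table/column names are placeholders; attribute names assume the
    # NUnit 2.x nunit-console XML format -- adjust for your version.
    import sqlite3
    import xml.etree.ElementTree as ET

    SCHEMA = """
    CREATE TABLE IF NOT EXISTS test_run (
        run_id   INTEGER PRIMARY KEY AUTOINCREMENT,
        run_date TEXT,
        total    INTEGER,
        failures INTEGER,
        not_run  INTEGER
    );
    CREATE TABLE IF NOT EXISTS test_case (
        run_id   INTEGER REFERENCES test_run(run_id),
        name     TEXT,
        executed INTEGER,
        success  INTEGER,
        duration REAL,
        message  TEXT
    );
    """

    def import_results(xml_path, db_path="results.db"):
        root = ET.parse(xml_path).getroot()   # <test-results ...> root element
        conn = sqlite3.connect(db_path)
        conn.executescript(SCHEMA)
        cur = conn.execute(
            "INSERT INTO test_run (run_date, total, failures, not_run) "
            "VALUES (?, ?, ?, ?)",
            (root.get("date"),
             int(root.get("total", 0)),
             int(root.get("failures", 0)),
             int(root.get("not-run", 0))))
        run_id = cur.lastrowid
        for case in root.iter("test-case"):   # nested inside test-suite elements
            msg = case.find(".//message")     # failure or ignore-reason message, if any
            conn.execute(
                "INSERT INTO test_case VALUES (?, ?, ?, ?, ?, ?)",
                (run_id,
                 case.get("name"),
                 1 if case.get("executed") == "True" else 0,
                 1 if case.get("success") == "True" else 0,
                 float(case.get("time") or 0.0),
                 msg.text if msg is not None else None))
        conn.commit()
        conn.close()

Our custom categories would presumably need an extra table keyed by test name, since I don't know that the category information is reliably present in the result XML.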

+4  A: 

It sounds to me like you're actually after a build server such as CruiseControl.NET or TeamCity.

Get the build server to run the tests, and it does the job of telling people what failed, and why.

I recommend TeamCity as it's several orders of magnitude easier to set up.

IainMH
There is a great deal that goes along with a build server. It may be the right answer, but it is not always or obviously better than working with results directly. We do not have the budget for TeamCity (although I think I am starting to get through to my boss), and CruiseControl.NET does not fit well with what we do now for builds. I am interested in a solution using just NUnit and a database.
Pat O
Why don't you have the budget for a free piece of software? TeamCity Professional doesn't cost anything.
David M
+2  A: 

I am here looking to solve the same issue. We are currently leaning toward writing an XSLT transform to turn the XML results into insert statements, then running the resulting file of insert statements through a command-line SQL interpreter. Ideally, I would rather have an NUnit add-in/extension that handles all of this for me. Unfortunately, I have not been able to find one.

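The gist of the transform, sketched here as a small Python script rather than XSLT (the table and column names are made up, and the attribute names assume the NUnit 2.x nunit-console format):

    # Sketch of the "turn results into INSERT statements" idea, using Python
    # instead of XSLT. Table/column names are made up; attribute names assume
    # the NUnit 2.x nunit-console XML format.
    import sys
    import xml.etree.ElementTree as ET

    def xml_to_inserts(xml_path):
        root = ET.parse(xml_path).getroot()
        run_date = root.get("date", "")
        for case in root.iter("test-case"):
            name = (case.get("name") or "").replace("'", "''")  # crude escaping
            executed = 1 if case.get("executed") == "True" else 0
            success = 1 if case.get("success") == "True" else 0
            yield ("INSERT INTO TestResult (RunDate, TestName, Executed, Success) "
                   "VALUES ('%s', '%s', %d, %d);" % (run_date, name, executed, success))

    if __name__ == "__main__":
        # e.g. python results_to_sql.py TestResult.xml > inserts.sql
        for statement in xml_to_inserts(sys.argv[1]):
            print(statement)

The generated inserts.sql file could then be fed to whatever command-line interpreter the target database provides (sqlcmd, sqlite3, psql, and so on).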
Pat O
+1  A: 

To build off IainMH's answer, you may want to take a look at Trac with Bitten, an open-source build system that can run NUnit tests and report the results. I currently use it for exactly that functionality.

Didius
+1  A: 

If you are using MS SQL Server, you can import each result file into a common column of the [xml] datatype. XPath queries, searching, and transformations can then be performed against that column.

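A rough sketch of what that could look like (the connection string, table, and column names are hypothetical, and it assumes the pyodbc package on the client side):

    # Sketch: store each nunit-console result file in an [xml] column and
    # query it with XPath on the server. Connection string, table, and
    # column names are hypothetical; assumes the pyodbc package.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=myserver;DATABASE=TestResults;Trusted_Connection=yes")
    cur = conn.cursor()

    cur.execute("""
        IF OBJECT_ID('dbo.NUnitRun') IS NULL
            CREATE TABLE dbo.NUnitRun (
                RunId     INT IDENTITY PRIMARY KEY,
                RunDate   DATETIME DEFAULT GETDATE(),
                ResultXml XML NOT NULL
            )
    """)

    with open("TestResult.xml", encoding="utf-8") as f:
        xml_text = f.read()
    # Drop any "<?xml ... encoding=...?>" declaration: SQL Server rejects an
    # explicit encoding when the document arrives as an nvarchar parameter.
    if xml_text.startswith("<?xml"):
        xml_text = xml_text.split("?>", 1)[1]
    cur.execute("INSERT INTO dbo.NUnitRun (ResultXml) VALUES (?)", xml_text)

    # Pull a statistic back out with an XPath expression against the xml column
    # (the attribute name assumes the NUnit 2.x <test-results> root element).
    cur.execute("""
        SELECT RunId,
               ResultXml.value('(/test-results/@failures)[1]', 'int') AS Failures
        FROM dbo.NUnitRun
    """)
    for run_id, failures in cur.fetchall():
        print(run_id, failures)

    conn.commit()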
František Žiačik
+1  A: 

Another alternative to CruiseControl or TeamCity, if you're strapped for cash, is Atlassian's Bamboo. I'm a huge fan of their software for its ease of use, and they have a deal where you can get Bamboo for 10 bucks.

dcbarans
A: 

Why do you need to have the results in a database? Who is going to use them? The number of failures cannot be large; if it is (repeatedly), your development process is wrong. Fix the process. Eliminate waste (one of the lean principles); don't collect it.

Take smaller steps (shorter iterations, continuous build), eliminate dependencies.

This is not commonly done, because projects that have this kind of problem don't deliver but get cancelled (eventually).

[edit] Michael, tracking NUnit failures over a longer time provides zero value. You need a short feedback loop. Fix the problems now. If you wait until you have accumulated a lot of problems, you will be overwhelmed by the noise.

Good problem tracking is done at the right (highest possible abstraction) level, and that is definitely not the unit-test level.

Stephan Eggermont
Tracking what fails (and when and how often) is key to identifying the root causes of your problems. Without it you waste time trying to solve the wrong process problems. I'd say good problem tracking is _critical_ to a lean development process.
Michael Anderson
Michael, exactly. We're suffering with a legacy codebase and a wildly insufficient test set; the tests we do have (a large number, but with incomplete coverage) show a certain amount of inconsistency in their results. Tracking those inconsistencies over long periods of time (months) is important to our prioritization.
McWafflestix
A: 

We had hoped to avoid this, but we've generated a database schema from the NUnit result XML schema. It's a bit deficient, however, because NUnit does some (inaccurate and strange) processing to determine some of the critical statistics ("ignored" vs. "not run", for example).

We're still hoping to find a schema/process that is NOT a complete CIT build system but that allows us to customize a database for importing the results; for now we're using a hand-rolled database that we'll need to customize heavily to get the desired reporting.

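As an illustration of the kind of cleanup involved, this is roughly how we classify each test case at import time (a sketch only; the attribute handling assumes the NUnit 2.x format, and the status names are our own):

    # Sketch of normalizing a test-case status at import time, since the raw
    # attributes don't map cleanly onto "ignored" vs. "not run".
    # Assumes the NUnit 2.x nunit-console XML format; status names are ours.
    def classify(case):
        """case is an ElementTree element for a <test-case> node."""
        result = (case.get("result") or "").lower()  # attribute present in NUnit 2.5+
        executed = case.get("executed") == "True"
        success = case.get("success") == "True"

        if result:                # prefer the explicit attribute when it exists
            return result         # e.g. "success", "failure", "ignored"
        if not executed:
            return "not_run"      # could be ignored, excluded, or not runnable
        return "passed" if success else "failed"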
McWafflestix
Would you be willing to share what you have so far? I am coming back to this and considering taking it on personally as an addition to NUnit.
Pat O