views: 339

answers: 5

How would you "unit test" a report created by some report engine like Crystal Reports or SQL Server Reporting Services?

+2  A: 

The best I can think of is comparing the results to an expected output.

Maybe some intelligence can be added, but it is not that easy to test such big blocks of output.

Gamecat
A: 

I agree with Gamecat.

Generate the report from fixed (constant) data, and compare it to the expected output for that data.

After that, you might be able to use simple tools such as diff to check whether the files are identical.
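
A minimal sketch of that comparison in Java, assuming hypothetical file paths for the freshly generated report and the hand-verified expected copy:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Arrays;

    public class ReportDiffCheck {

        public static void main(String[] args) throws IOException {
            // Hypothetical paths: the report just rendered from fixed test data,
            // and the manually verified copy kept under version control.
            Path actual = Path.of("build/reports/orders.pdf");
            Path expected = Path.of("src/test/resources/expected/orders.pdf");

            // Byte-for-byte comparison, i.e. the programmatic equivalent of diff.
            if (!Arrays.equals(Files.readAllBytes(expected), Files.readAllBytes(actual))) {
                throw new AssertionError("Generated report differs from the expected output");
            }
            System.out.println("Report matches the expected output");
        }
    }

Note that binary formats such as PDF often embed generation timestamps, so a byte-level comparison may require configuring the report engine for deterministic output.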

Einar
+3  A: 

For testing our own Java-based reporting product, i-net Crystal-Clear, we run a whole slew of test reports once, exporting them to various export formats and making sure the output is as desired. We then continuously run these same reports daily, comparing the results to the originals. Any differences show up as test failures.

It has worked pretty well for us. The disadvantage is that even minor differences that don't actually matter show up as test failures until the reference data is reset.

Side note: this isn't exactly a unit test but rather an acceptance test. But I don't see how you could truly "unit test" an entire report.
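
A rough sketch of that kind of regression run, assuming a hypothetical exportReport(name, format) helper standing in for the actual export API and golden copies stored under a golden/ directory:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Arrays;
    import java.util.List;

    public class ReportRegressionRun {

        // Hypothetical report names and export formats; substitute your own.
        private static final List<String> REPORTS = List.of("invoice", "inventory");
        private static final List<String> FORMATS = List.of("pdf", "xls", "html");

        public static void main(String[] args) throws IOException {
            int failures = 0;
            for (String report : REPORTS) {
                for (String format : FORMATS) {
                    Path actual = exportReport(report, format); // hypothetical helper
                    Path golden = Path.of("golden", report + "." + format);
                    if (!Arrays.equals(Files.readAllBytes(golden), Files.readAllBytes(actual))) {
                        System.err.println("MISMATCH: " + report + " (" + format + ")");
                        failures++;
                    }
                }
            }
            if (failures > 0) {
                throw new AssertionError(failures + " report(s) differ from the stored originals");
            }
        }

        // Placeholder for a call into the reporting engine's export API.
        private static Path exportReport(String report, String format) {
            return Path.of("out", report + "." + format);
        }
    }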

Epaga
A: 

My current idea is to create tests at two levels:

  • Unit tests: Structure the report to enable testing, borrowing ideas for testing a UI such as the Humble View pattern. The report itself is made as dumb as possible; it should consist mostly of simple field bindings. The data items/objects that act as the source of these bindings can then be unit tested (see the sketch after this list).

  • Acceptance tests: Generate some example reports and verify them by hand first. Then set up an automated test that compares fresh output against those verified copies using diff.
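
A minimal sketch of the unit-test level, with a hypothetical OrderReportModel acting as the binding source and JUnit 5 assumed as the test framework:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.math.BigDecimal;
    import java.util.List;
    import org.junit.jupiter.api.Test;

    // Hypothetical binding source: the report only binds its fields to
    // properties of this class, so all the logic worth testing lives here.
    class OrderReportModel {
        private final List<BigDecimal> lineTotals;

        OrderReportModel(List<BigDecimal> lineTotals) {
            this.lineTotals = lineTotals;
        }

        // The value the report's "Grand Total" field binds to.
        BigDecimal grandTotal() {
            return lineTotals.stream().reduce(BigDecimal.ZERO, BigDecimal::add);
        }
    }

    class OrderReportModelTest {

        @Test
        void grandTotalSumsAllLineItems() {
            OrderReportModel model = new OrderReportModel(
                    List.of(new BigDecimal("10.00"), new BigDecimal("2.50")));

            // No report engine involved: only the binding source is exercised.
            assertEquals(new BigDecimal("12.50"), model.grandTotal());
        }
    }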

+3  A: 

The problem with reports is akin to the problem with GUIs. If the report/GUI has a lot of (misplaced) intelligence, it is going to make testing difficult. The solution then is to:

  • Separated Presentation: Separate presentation from content (data access / domain / business rules). In the current context that would mean creating some sort of ViewModel class that mirrors the content of the final report (e.g. if you have order details and line items in your report, this class should have properties for the details and a list of line-item objects). The ViewModel is infinitely simpler to test. The last mile, applying presentation to the content, should be relatively trivial (thin UI).
    For example, if you use XSLT to render your reports, you can test the data XML using tools like XmlUnit or a string compare (see the sketch after this list). You can then be reasonably confident in the XSL transformation of that data XML into the final report, and any bugs there would be trivial to fix.
  • However, if you're using a third-party vendor like Crystal Reports, you have no control over or access to hook into the report generation. In such cases, the best you can do is generate representative/expected output files (e.g. PDFs), called Golden Files, and use them as read-only resources in your tests to compare against the actual output. This approach is very fragile, though: any substantial change to the report-generation code might render all previous Golden Files incorrect, so they would have to be regenerated. If the cost-to-benefit ratio of the automation is too high, I'd say go manual with old-school Word-doc test plans.
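
The data-XML check from the first point could look roughly like this, assuming XMLUnit 2.x and JUnit 5, with buildOrderXml standing in for whatever produces the XML that is fed to the XSLT:

    import static org.junit.jupiter.api.Assertions.assertFalse;

    import org.junit.jupiter.api.Test;
    import org.xmlunit.builder.DiffBuilder;
    import org.xmlunit.diff.Diff;

    class OrderDataXmlTest {

        @Test
        void dataXmlMatchesExpectedStructure() {
            // Expected data XML, verified by hand once and kept with the tests.
            String expected =
                    "<order id=\"42\"><line sku=\"A-1\" qty=\"2\"/><total>12.50</total></order>";

            // Hypothetical helper: builds the XML that is later transformed by the XSLT.
            String actual = buildOrderXml(42);

            Diff diff = DiffBuilder.compare(expected)
                    .withTest(actual)
                    .ignoreWhitespace()
                    .checkForSimilar()
                    .build();

            assertFalse(diff.hasDifferences(), diff.toString());
        }

        // Placeholder for the real data-access / ViewModel-to-XML step.
        private String buildOrderXml(int orderId) {
            return "<order id=\"" + orderId + "\">"
                    + "<line sku=\"A-1\" qty=\"2\"/>"
                    + "<total>12.50</total>"
                    + "</order>";
        }
    }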
Gishu