I am trying to become more familiar with test-driven approaches. One drawback for me is that a major part of my code is generated content for reporting (PDF documents, chart images). A complex report designer is always involved, and there is no easy test of correctness. There is no way to test just fragments!

Do you know TDD practices for this situation?

+2  A: 

You could try using a web service as your reporting data source and test that, but you are not going to have unit tests for the rendering. This is exactly the same problem you face when testing views. Sure, you can use a web testing framework like Selenium, but you probably won't be practicing true TDD: you'll be creating tests after your code is done.

In short, use common sense. It probably does not make sense to attempt to test the rendering of a report. You can have manual test cases that a tester will have to go through by hand or simply check the reports yourself.

You might also want to check out "How Much Unit Test Coverage Do You Need? - The Testivus Answer"
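For example, here is a minimal sketch of testing the report's data source independently of any rendering; the get_sales_summary() function is hypothetical, standing in for whatever feeds your reports:

```python
import unittest
from datetime import date


def get_sales_summary(start, end, orders):
    """Hypothetical data-source function: aggregate orders for a report."""
    relevant = [o for o in orders if start <= o["date"] <= end]
    return {"total": sum(o["amount"] for o in relevant), "count": len(relevant)}


class SalesSummaryTest(unittest.TestCase):
    def test_totals_only_orders_in_range(self):
        orders = [
            {"date": date(2009, 1, 5), "amount": 100.0},
            {"date": date(2009, 2, 5), "amount": 50.0},  # outside the range
        ]
        summary = get_sales_summary(date(2009, 1, 1), date(2009, 1, 31), orders)
        self.assertEqual(summary["total"], 100.0)
        self.assertEqual(summary["count"], 1)


if __name__ == "__main__":
    unittest.main()
```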

jcm
+3  A: 

The question I ask myself in these situations is "how do I know I got it right?"

I've written a lot of code in my career, and almost all of it didn't work the first time. Almost every time I've gone back and changed code for a refactoring, feature change, performance, or bug fix, I've broken it again. TDD protects me from myself (thank goodness!).

In the case of generated code, I don't feel compelled to test the code. That is, I trust the code generator. However, I do want to test my inputs to the code generators. Exactly how to do that depends on the situation, but the general approach is to ask myself how I might be getting it wrong, and then to figure out how to verify that I got it right.

Maybe I write an automated test. Maybe I inspect something manually, but that's pretty risky. Maybe something else. It depends on the situation.
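A minimal sketch of "test the inputs, trust the generator": build_chart_spec() is a hypothetical function that prepares the input handed to a chart generator, and that input is what gets tested:

```python
def build_chart_spec(monthly_totals):
    """Hypothetical: turn raw totals into the spec a chart generator consumes."""
    labels = sorted(monthly_totals)
    return {
        "labels": labels,
        "values": [monthly_totals[m] for m in labels],
        "y_max": max(monthly_totals.values()) * 1.1,  # 10% axis headroom
    }


def test_chart_spec_orders_labels_and_pads_axis():
    spec = build_chart_spec({"2009-02": 20.0, "2009-01": 10.0})
    assert spec["labels"] == ["2009-01", "2009-02"]
    assert spec["values"] == [10.0, 20.0]
    assert abs(spec["y_max"] - 22.0) < 1e-9
```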

Jay Bazuzi
+3  A: 

Some applications or frameworks are just inherently unit-test-unfriendly, and there's really not a lot you can do about it.

I prefer to avoid such frameworks altogether, but if absolutely forced to deal with such issues, it can be helpful to extract all logic into a testable library, leaving only declarative code behind in the framework.
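A minimal sketch of that split, with hypothetical names: all decisions live in a plain, testable function, and the framework-facing layer only maps the result onto the designer's fields:

```python
# report_logic.py -- pure, framework-free, easy to unit test
def invoice_totals(lines, tax_rate):
    """All the logic: lines are (quantity, unit_price) pairs."""
    subtotal = sum(qty * price for qty, price in lines)
    tax = subtotal * tax_rate
    return {"subtotal": subtotal, "tax": tax, "total": subtotal + tax}


# report_binding.py -- declarative glue left behind in the framework;
# designer_doc and its set_field() method are assumptions about your designer's API
def fill_invoice_fields(designer_doc, lines, tax_rate):
    totals = invoice_totals(lines, tax_rate)
    for name, value in totals.items():
        designer_doc.set_field(name, value)
```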

Mark Seemann
+2  A: 

To put a slightly different spin on answers from Mark Seemann and Jay Bazuzi:

Your problem is that the reporting front-end produces output in a format you cannot easily inspect in the "verify" part of your tests.

The way to deal with this kind of problem is to:

  1. Have some very high-level integration tests that superficially verify that your back-end code hooks correctly into your front-end code. I usually call those tests "smoke tests", as in "if I turn on the power and it smokes, it's bad".

  2. Find a different way to test your back-end reporting code: either test an intermediate output data structure, or implement an alternate output front-end that is more test-friendly (HTML, plain text, whatever). A sketch of the first option follows below.

This is similar to the common problem of testing web apps: it is not possible to automatically test that "the page looks right". But it is sufficient to test that the words and numbers in the page data are correct (using a programmatic browser such as mechanize and a page scraper), and to have a few superficial functional tests (with Selenium or Windmill) if the page is critically dependent on JavaScript.
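A minimal sketch of the "intermediate output data structure" option, under assumed names: build_report_model() returns a structure that tests can inspect directly, while the renderer gets only a smoke test:

```python
def build_report_model(records):
    """Hypothetical intermediate step between raw data and the PDF renderer."""
    return {
        "rows": [(r["name"], r["value"]) for r in records],
        "grand_total": sum(r["value"] for r in records),
    }


def render_pdf(model):
    """Stand-in for the real designer-driven renderer (assumption)."""
    return b"%PDF-1.4 ..."  # the real thing would call the reporting engine


def test_report_model_contents():
    model = build_report_model([{"name": "a", "value": 1}, {"name": "b", "value": 2}])
    assert model["grand_total"] == 3
    assert model["rows"] == [("a", 1), ("b", 2)]


def test_render_smoke():
    # "Turn on the power" test: only assert that rendering produces something.
    assert render_pdf(build_report_model([])).startswith(b"%PDF")
```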

ddaa
My first idea was to combine your "smoke tests" with a second test run against a database of reference results. Phase 1: there is no entry in the reference database, so run the initial test, store the results, and validate them manually. Phase 2: retest against the reference database.
Dirk
+1  A: 

You could use Acceptance Test Driven Development in place of the unit tests, and keep validated reports for well-known data as references.

However, this kind of test does not give as fine-grained a diagnostic as unit tests do; it usually only provides a PASS/FAIL result, and, should the reports change often, the references need to be regenerated and re-validated as well.
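A minimal sketch of such an acceptance check; generate_report() and the reference path are assumptions, and note how the assertion can only say PASS or FAIL:

```python
from pathlib import Path

REFERENCE = Path("tests/references/report_for_known_data.txt")


def generate_report(data):
    """Stand-in for the real report generator (assumption)."""
    return "\n".join(f"{k}: {v}" for k, v in sorted(data.items())).encode()


def test_report_for_well_known_data_matches_reference():
    current = generate_report({"total": 100, "count": 3})
    # PASS/FAIL only: any difference from the validated reference fails,
    # with no finer diagnostic about what went wrong.
    assert current == REFERENCE.read_bytes()
```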

philippe
It's not Red/Green but Red/Yellow/Green. But you are right: this may be hard to practice in the case of often-changing reports.
Dirk
+1  A: 

Consider extracting the text from the PDF and checking it. This won't give you the formatting, however. Some PDF extraction programs can also pull out the images if the charts are embedded in the PDF.
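A minimal sketch using pypdf (just one of several PDF text-extraction libraries); the file path and the expected strings are made up:

```python
from pypdf import PdfReader


def test_report_pdf_contains_expected_figures():
    reader = PdfReader("output/monthly_report.pdf")  # hypothetical path
    text = "".join(page.extract_text() for page in reader.pages)
    # Checks words and numbers only; says nothing about layout or formatting.
    assert "Monthly Report" in text
    assert "Grand total: 1,234.56" in text  # hypothetical expected value
```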

Brian Carlton
That's worth considering. But I think the code to deconstruct and validate PDFs this way has the same complexity as the original report generation. And who tests the test routines?
Dirk
+1  A: 

Faced with this situation, I try two approaches.

  1. The Golden Master approach. Generate the report once, check it yourself, then save it as the "golden master". Write an automated test that compares later output with the golden master and fails when they differ (sketched below).

  2. Automate the tests for the data, but check the format manually. I automate checks for the module that generates the report data, but to check the report format, I generate a report with hardcoded values and check the report by hand.

I strongly encourage you not to generate the full report just to check the correctness of the data on the report. When you want to check the report (not the data), then generate the report; when you want to check the data (not the format), then only generate the data.
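A minimal sketch of the golden master comparison; the file location and the UPDATE_GOLDEN switch are assumptions, not part of the approach itself:

```python
import os
from pathlib import Path

GOLDEN = Path("tests/golden/invoice.pdf")


def check_against_golden_master(current_bytes):
    if os.environ.get("UPDATE_GOLDEN"):
        # First run (or a deliberate update): save the output, then inspect it
        # by hand before committing it as the new golden master.
        GOLDEN.parent.mkdir(parents=True, exist_ok=True)
        GOLDEN.write_bytes(current_bytes)
        return
    # All later runs: fail on any difference from the approved master.
    assert current_bytes == GOLDEN.read_bytes(), "report differs from golden master"
```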

J. B. Rainsberger