I have a class that processes two XML files and produces a text file.

I would like to write a bunch of unit / integration tests that can individually pass or fail for this class that do the following:

  1. For input A and B, generate the output.
  2. Compare the contents of the generated file to the contents of the expected output.
  3. When the actual contents differ from the expected contents, fail and display some useful information about the differences.

Below is the prototype for the class along with my first stab at unit tests.

Is there a pattern I should be using for this sort of testing, or do people tend to write zillions of TestX() functions?

Is there a better way to coax text-file differences out of NUnit? Should I embed a text-file diff algorithm?


class ReportGenerator
{
    public string Generate(string inputPathA, string inputPathB)
    {
        //do stuff
    }
}


[TestFixture]
public class ReportGeneratorTests
{
     static void Diff(string pathToExpectedResult, string pathToActualResult)
     {
         using (StreamReader rs1 = File.OpenText(pathToExpectedResult))
         {
             using (StreamReader rs2 = File.OpenText(pathToActualResult))
             {
                 string actualContents = rs2.ReadToEnd();
                 string expectedContents = rs1.ReadToEnd();                  

                 //this works, but the output could be a LOT more useful.
                 Assert.AreEqual(expectedContents, actualContents);
             }
         }
     }

     static void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
     {
          ReportGenerator obj = new ReportGenerator();
          string pathToResult = obj.Generate(pathToInputA, pathToInputB);
          Diff(pathToExpectedResult, pathToResult);
     }

     [Test]
     public void TestX()
     {
          TestGenerate("x1.xml", "x2.xml", "x-expected.txt");
     }

     [Test]
     public void TestY()
     {
          TestGenerate("y1.xml", "y2.xml", "y-expected.txt");
     }

     //etc...
}


Update

I'm not interested in testing the diff functionality. I just want to use it to produce more readable failures.

A: 

Rather than calling .AreEqual, you could parse the two input streams yourself, keeping a count of line and column, and compare the contents. As soon as you find a difference, you can generate a message like...

Line 32 Column 12 - Found 'x' when 'y' was expected

You could optionally enhance that by displaying multiple lines of output:

Difference at Line 32 Column 12, first difference shown
A = this is a t**x**st
B = this is a t**e**st
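
A minimal sketch of that idea (the DiffAssert helper name and the line-by-line approach are my own assumptions, not code from this answer):

using System;
using System.IO;
using NUnit.Framework;

static class DiffAssert
{
    // Compares two text files line by line and fails with the
    // line/column of the first mismatch plus both lines of text.
    public static void FilesAreEqual(string pathToExpected, string pathToActual)
    {
        string[] expected = File.ReadAllLines(pathToExpected);
        string[] actual = File.ReadAllLines(pathToActual);
        int lineCount = Math.Max(expected.Length, actual.Length);

        for (int i = 0; i < lineCount; i++)
        {
            string e = i < expected.Length ? expected[i] : "(missing line)";
            string a = i < actual.Length ? actual[i] : "(missing line)";
            if (e == a)
                continue;

            // Find the first differing column on this line.
            int col = 0;
            while (col < e.Length && col < a.Length && e[col] == a[col])
                col++;

            Assert.Fail(string.Format(
                "Difference at line {0}, column {1}\nExpected: {2}\nActual:   {3}",
                i + 1, col + 1, e, a));
        }
    }
}

The Diff method from the question could then delegate to DiffAssert.FilesAreEqual to get the richer failure message.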

Note: as a rule, I'd only generate one of the two streams through my code. The other I'd grab from a test/text file, having verified by eye (or some other method) that the data it contains is correct!

Ray Hayes
A: 

I would probably write a single unit test that contains a loop. Inside the loop, I'd read two XML files and a diff file, diff the XML files (without writing the result to disk), and compare the result to the diff file read from disk. The files would be numbered, e.g. a1.xml, b1.xml, diff1.txt; a2.xml, b2.xml, diff2.txt; a3.xml, b3.xml, diff3.txt, etc., and the loop stops when it doesn't find the next number.

Then, you can write new tests just by adding new text files.
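
A rough sketch of that loop, adapted to the ReportGenerator from the question and assuming it lives in the same fixture (so File and Diff are already available); the file-naming convention is taken from this answer:

[Test]
public void TestAllCases()
{
    // Keep going as long as the next numbered input pair exists.
    for (int i = 1; File.Exists("a" + i + ".xml"); i++)
    {
        ReportGenerator obj = new ReportGenerator();
        string pathToResult = obj.Generate("a" + i + ".xml", "b" + i + ".xml");
        Diff("diff" + i + ".txt", pathToResult);
    }
}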

Alan Hensel
The only problem with that is that you only have one test. It would be good to know whether I broke only 1 of 80 tests or 70 of 80 tests. Knowing which tests broke would be helpful in tracking down problems.
Adam Tegen
So, that depends on how you write the test. In other words, if the diffs don't match, then Assert.Fail("your more informative message here"). Or append to a list of failed tests, and Assert.Fail after the loop ends if the list isn't empty.
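
A small sketch of that "collect failures, fail once" variant, assuming System.Collections.Generic and the TestGenerate helper from the question:

var failures = new List<string>();
for (int i = 1; File.Exists("a" + i + ".xml"); i++)
{
    try
    {
        TestGenerate("a" + i + ".xml", "b" + i + ".xml", "diff" + i + ".txt");
    }
    catch (AssertionException ex)
    {
        failures.Add("case " + i + ": " + ex.Message);
    }
}
if (failures.Count > 0)
    Assert.Fail(string.Join("\n", failures.ToArray()));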
Alan Hensel
+4  A: 

As for the multiple tests with different data, use the NUnit RowTest extension:

using NUnit.Framework.Extensions;

[RowTest]
[Row("x1.xml", "x2.xml", "x-expected.txt")]
[Row("y1.xml", "y2.xml", "y-expected.txt")]
public void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
{
    ReportGenerator obj = new ReportGenerator();
    string pathToResult = obj.Generate(pathToInputA, pathToInputB);
    Diff(pathToExpectedResult, pathToResult);
}
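
On NUnit 2.5 and later, the built-in [TestCase] attribute gives the same parameterized behavior without the Extensions assembly; a sketch under that assumption:

[TestCase("x1.xml", "x2.xml", "x-expected.txt")]
[TestCase("y1.xml", "y2.xml", "y-expected.txt")]
public void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
{
    ReportGenerator obj = new ReportGenerator();
    string pathToResult = obj.Generate(pathToInputA, pathToInputB);
    Diff(pathToExpectedResult, pathToResult);
}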
Adam Tegen
A: 

I would probably use XmlReader to iterate through the files and compare them. When I hit a difference, I would display an XPath to the location where the files differ.
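
A simplified sketch of that idea, if the files being compared are XML (the reported location is an approximate element path rather than a true XPath, and the helper name is an assumption):

using System.Collections.Generic;
using System.Xml;
using NUnit.Framework;

static class XmlCompare
{
    public static void AssertSameXml(string pathToExpected, string pathToActual)
    {
        var settings = new XmlReaderSettings { IgnoreWhitespace = true, IgnoreComments = true };
        var path = new List<string>();

        using (XmlReader expected = XmlReader.Create(pathToExpected, settings))
        using (XmlReader actual = XmlReader.Create(pathToActual, settings))
        {
            while (true)
            {
                bool moreExpected = expected.Read();
                bool moreActual = actual.Read();
                if (!moreExpected || !moreActual)
                {
                    Assert.AreEqual(moreExpected, moreActual, "One document ends before the other");
                    return;
                }

                // Maintain a rough element path for the failure message.
                if (expected.NodeType == XmlNodeType.Element && !expected.IsEmptyElement)
                    path.Add(expected.Name);
                string where = "/" + string.Join("/", path.ToArray());
                if (expected.NodeType == XmlNodeType.EndElement && path.Count > 0)
                    path.RemoveAt(path.Count - 1);

                Assert.AreEqual(expected.NodeType, actual.NodeType, "Node type differs at " + where);
                Assert.AreEqual(expected.Name, actual.Name, "Element name differs at " + where);
                Assert.AreEqual(expected.Value, actual.Value, "Value differs at " + where);
            }
        }
    }
}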

PS: In reality, it was always enough for me to simply read the whole file into a string and compare the two strings. For reporting, it is enough to see that the test failed. Then, when debugging, I usually diff the files with Araxis Merge to see exactly where the issues are.

David Pokluda
+1  A: 

You are probably asking about testing against "gold" data. I don't know if there is a universally accepted term for this kind of testing, but this is how we do it.

Create a base fixture class. It basically has a "void DoTest(string fileName)" method, which reads the specified file into memory, executes the abstract transformation method "string Transform(string text)", then reads fileName.gold from the same place and compares the transformed text with what was expected. If the content differs, it throws an exception. The exception contains the line number of the first difference as well as the text of the expected and actual lines. As the text is stable, this is usually enough information to spot the problem right away. Be sure to mark the lines with "Expected:" and "Actual:", or you will be guessing forever which is which when looking at test results.

Then you will have specific test fixtures in which you implement the Transform method that does the real job, and tests that look like this:

[Test] public void TestX() { DoTest("X"); }
[Test] public void TestY() { DoTest("Y"); }

The name of the failed test will instantly tell you what is broken. Of course, you can use row testing to group similar tests. Having separate tests also helps in a number of situations, such as ignoring tests, communicating tests to colleagues, and so on. It is not a big deal to create a snippet that will generate a test for you in a second; you will spend much more time preparing the data.

You will also need some test data and a way for your base fixture to find it; be sure to set up project-wide rules for this. If a test fails, dump the actual output to a file next to the gold file, and erase it if the test passes. That way you can use a diff tool when needed. When no gold data is found, the test fails with an appropriate message, but the actual output is written anyway, so you can check that it is correct and copy it to become the "gold".
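
A condensed sketch of the described base fixture (the TestData folder, file extensions, and helper names are assumptions; the real rules are whatever you set up for the project):

using System;
using System.IO;
using NUnit.Framework;

public abstract class GoldDataFixture
{
    // Derived fixtures implement the transformation under test.
    protected abstract string Transform(string text);

    protected void DoTest(string name)
    {
        string inputPath = Path.Combine("TestData", name + ".txt");
        string goldPath = Path.Combine("TestData", name + ".gold");
        string actualPath = Path.Combine("TestData", name + ".actual");

        string actualText = Transform(File.ReadAllText(inputPath));

        if (!File.Exists(goldPath))
        {
            // No gold data yet: fail, but keep the output so it can be reviewed and promoted to gold.
            File.WriteAllText(actualPath, actualText);
            Assert.Fail("No gold data for '" + name + "'; actual output written to " + actualPath);
        }

        string[] expected = File.ReadAllLines(goldPath);
        string[] actual = actualText.Replace("\r\n", "\n").Split('\n');

        for (int i = 0; i < Math.Max(expected.Length, actual.Length); i++)
        {
            string e = i < expected.Length ? expected[i] : "(missing line)";
            string a = i < actual.Length ? actual[i] : "(missing line)";
            if (e != a)
            {
                // Dump the actual output next to the gold file so a diff tool can be used on it.
                File.WriteAllText(actualPath, actualText);
                Assert.Fail("First difference at line " + (i + 1) +
                            "\nExpected: " + e + "\nActual:   " + a);
            }
        }

        if (File.Exists(actualPath))
            File.Delete(actualPath); // erase the dump when the test passes
    }
}

A concrete fixture then just overrides Transform and calls DoTest("X") from the tests shown above.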

Ilya Ryzhenkov