Hi guys,

I'm taking my first steps with unit testing and am unsure about two principles that seem to contradict each other when it comes to unit tests:

  • Every single unit test should be self-contained and not depend on others.
  • Don't repeat yourself.

To be more concrete, I've got an importer which I want to test. The importer has an "Import" function, taking raw data (e.g. out of a CSV) and returning an object of a certain kind, which will also be stored in a database through an ORM (LINQ to SQL in this case).

Now I want to test several things, e.g. that the returned object is not null, that its mandatory fields are not null or empty, and that its attributes have the correct values. I wrote 3 unit tests for this. Should each test do the import itself, or does this belong in general setup logic? On the other hand, if this blog post is to be believed, the latter would be a bad idea, as far as my understanding goes. Also, wouldn't it violate the self-containment?

My class looks like this:

[TestFixture]
public class ImportJob
{
    private TransactionScope scope;
    private CsvImporter csvImporter;

    private readonly string[] row;

    public ImportJob()
    {
        // Read a single record from the test CSV once, for use by all tests.
        CsvReader reader = new CsvReader(new StreamReader(
            @"C:\SomePath\unit_test.csv", Encoding.Default),
            false, ';');
        reader.MissingFieldAction = MissingFieldAction.ReplaceByEmpty;

        int fieldCount = reader.FieldCount;
        row = new string[fieldCount];

        reader.ReadNextRecord();
        reader.CopyCurrentRecordTo(row);
    }

    [SetUp]
    public void SetUp()
    {
        // Every test runs in its own transaction, rolled back in TearDown.
        scope = new TransactionScope();
        csvImporter = new CsvImporter();
    }

    [TearDown]
    public void TearDown()
    {
        scope.Dispose();
    }

    [Test]
    public void ImportJob_IsNotNull()
    {
        Job j = csvImporter.ImportJob(row);

        Assert.IsNotNull(j);
    }

    [Test]
    public void ImportJob_MandatoryFields_AreNotNull()
    {
        Job j = csvImporter.ImportJob(row);

        Assert.IsNotNull(j.Customer);
        Assert.IsNotNull(j.DateCreated);
        Assert.IsNotNull(j.OrderNo);
    }

    [Test]
    public void ImportJob_MandatoryFields_AreValid()
    {
        Job j = csvImporter.ImportJob(row);
        Customer c = csvImporter.GetCustomer("01-01234567");

        Assert.AreEqual(j.Customer, c);
        Assert.That(j.DateCreated.Date == DateTime.Now.Date);
        Assert.That(j.OrderNo == row[(int)Csv.RechNmrPruef]);
    }

    // etc. ...
}

As can be seen, I'm repeating the line Job j = csvImporter.ImportJob(row); in every unit test, as they should be self-contained. But this violates the DRY principle and might cause performance issues some day.

What's the best practice in this case?

+4  A: 

Your test classes are no different from ordinary classes and should be treated as such: all the good practices (DRY, code reuse, etc.) apply there as well.

Anton Gogolev
+1  A: 

Whether you move

Job j = csvImporter.ImportJob(row);

into the SetUp function or not, it will still be executed before every test. If you have the exact same line at the top of each test, then it is only logical to move that line into the SetUp portion.
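
For instance, the refactored fixture might look like this (a sketch based on the code in the question; the shared job field is new):

private Job job;  // populated fresh before every test

[SetUp]
public void SetUp()
{
    scope = new TransactionScope();
    csvImporter = new CsvImporter();
    job = csvImporter.ImportJob(row);  // still executed once per test, just in one place
}

[Test]
public void ImportJob_IsNotNull()
{
    Assert.IsNotNull(job);
}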

The blog entry you posted complained about the setup of the test values being done in a function disconnected from (and possibly not even on the same screen as) the test itself. Your case is different, in that the test data is driven by an external text file, so that complaint doesn't apply to your specific use case either.

scwagner
+1  A: 

You could put the line Job j = csvImporter.ImportJob(row); in your SetUp. That way you're not repeating code.

You actually should run that line of code for each and every test; otherwise tests will start failing because of things that happened in other tests, and that becomes hard to maintain.

The performance problem isn't caused by the DRY violation: you should set up everything for each and every test anyway. Also, these aren't unit tests but integration tests, because they rely on an external file to run. You could make ImportJob read from a stream instead of opening a file directly; then you could test with a MemoryStream.
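
A rough sketch of what that could look like, assuming you add an ImportJob overload that accepts a TextReader (that overload doesn't exist in the code above; the field layout and values are made up for illustration):

[Test]
public void ImportJob_FromInMemoryCsv_IsNotNull()
{
    // Build a CSV row in memory instead of reading unit_test.csv from disk.
    string csv = "01-01234567;4711;some;more;fields";
    byte[] bytes = Encoding.Default.GetBytes(csv);

    using (MemoryStream stream = new MemoryStream(bytes))
    using (StreamReader reader = new StreamReader(stream, Encoding.Default))
    {
        // Hypothetical overload that parses from any TextReader.
        Job j = csvImporter.ImportJob(reader);
        Assert.IsNotNull(j);
    }
}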

Mendelt
Could you provide a code example of how you would do this?
Michael Barth
+2  A: 

That depends on how much of your scenario is common to your tests. In the blog post you referred to, the main complaint was that the SetUp method did different setup for each of the three tests, and that can't be considered best practice. In your case you've got the same setup for each test/scenario, so you should use a shared SetUp instead of duplicating the code in each test. If you later find tests that don't share this setup, or that require a different setup shared between a set of tests, refactor those tests into a new test case class. You could also have shared setup methods that aren't marked with [SetUp] but get called at the beginning of each test that needs them:

[Test]
public void SomeTest()
{
    SetupSomeSharedState();  // explicit call instead of an implicit [SetUp]
    // ...
}

A way of finding the right mix could be to start off without a SetUp method, and when you find that you're duplicating setup code between tests, refactor it into a shared method.

henrik
+1  A: 

In one of my projects we agreed as a team that we would not implement any initialization logic in unit test constructors. We have the SetUp, TestFixtureSetUp and SetUpFixture (since NUnit 2.4) attributes. They are enough for almost all cases where we need initialization. We force developers to use one of these attributes and to define explicitly whether the initialization code runs before each test, before all tests in a fixture, or before all tests in a namespace.
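
To illustrate the three levels (class and method names are placeholders; in NUnit 2.4 the per-namespace fixture uses [SetUpFixture] with [SetUp]/[TearDown] methods):

using NUnit.Framework;

// Runs once before/after all tests in this namespace (NUnit 2.4+).
[SetUpFixture]
public class NamespaceSetup
{
    [SetUp]
    public void BeforeAllTestsInNamespace() { /* e.g. prepare a test database */ }

    [TearDown]
    public void AfterAllTestsInNamespace() { /* clean up */ }
}

[TestFixture]
public class SomeTests
{
    // Runs once before all tests in this fixture.
    [TestFixtureSetUp]
    public void BeforeAllTestsInFixture() { }

    // Runs before each individual test.
    [SetUp]
    public void BeforeEachTest() { }

    [Test]
    public void SomeTest() { }
}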

However, I disagree that unit tests should always conform to all the good practices expected in usual development. It is desirable, but it is not a rule. My point is that in real life the customer doesn't pay for unit tests. The customer pays for the overall quality and functionality of the product. He is not interested in whether you deliver a bug-free product by covering 100% of the code with unit tests/automated GUI tests, or by employing 3 manual testers per developer who click on every piece of the screen after each build. Unit tests don't add business value to the product; they allow you to save on development and testing effort and force developers to write better code. So it is always up to you: will you spend additional time on test refactoring to make the unit tests perfect? Or will you spend the same amount of time adding new features for the customers of your product? Don't forget, either, that unit tests should be as simple as possible. How do you find the golden mean?

I suppose this depends on the project, and the PM or team lead needs to plan and estimate the quality of the unit tests, their completeness and code coverage, just as they estimate all the other business features of the product. In my opinion, it is better to have copy-paste unit tests that cover 80% of the production code than very well designed and separated unit tests that cover only 20%.

Bogdan_Ch
I've declared the 'row' field readonly, which means I **have** to initialize it in the constructor. I would prefer it to remain readonly, as this averts the danger of any test modifying 'row'. How would you solve this?
Michael Barth
You may initialize it when declaring the variable: private readonly string[] row = { your values here }; Also, you may have no private field at all. Why not have different local row variables in different test methods, with different values (this could increase code coverage)?
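For example (values made up for illustration):

[Test]
public void ImportJob_SomeOtherScenario()
{
    // Local row, specific to this test; no shared mutable state.
    string[] row = { "01-01234567", "some", "test", "values" };
    Job j = csvImporter.ImportJob(row);
    Assert.IsNotNull(j);
}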
Bogdan_Ch
And please don't take what I said about not using constructors as a universal rule. It was our development team's decision for a particular project; we found it convenient. As for your own unit test, I would say it is already good enough, and you are now trying to make it "perfect". Unfortunately, in most projects we don't have time to write perfect unit tests; we write them only until they are "good enough". For example, I would not worry about making fields readonly. If I accidentally write code that modifies a row and a test fails because of it, I will run it, find the error, and fix it.
Bogdan_Ch
I see, thanks for the elaboration!
Michael Barth