tags: (none)

views: 76

answers: 4

When writing unit tests, there are cases where one can either create an Assert for each condition that could fail, or a single Assert that would catch all such conditions. A C# example:

Dictionary<string, string> dict = LoadDictionary();
// Optional Asserts:
Assert.IsNotNull(dict, "LoadDictionary() returned null");
Assert.IsTrue(dict.Count > 0, "Dictionary is empty");
Assert.IsTrue(dict.ContainsKey("ExpectedKey"), "'ExpectedKey' not in dictionary");
// Condition actually interested in testing:
Assert.IsTrue(dict["ExpectedKey"] == "ExpectedValue", "'ExpectedKey' is present but value is not 'ExpectedValue'");

In this kind of situation, is there value on a large, multi-person project in adding the "Optional Asserts"? There's more work involved (if you have lots of unit tests), but it will be immediately clearer where the problem lies.

I'm using VS 2010 and the integrated testing tools but intend the question to be generic.

+1  A: 

It is advisable to have only one assert per test; that way, you are clear in your own mind about exactly what you are testing.

  • In TDD, this will make it easier to isolate exactly what it is you are about to implement after a failing test.
  • The reports generated by your testing tools will be more accurate.

Each of those "optional" Asserts strikes me as being a separate test, but there is no need to assert each of them in every test.

  • Have a test for whether LoadDictionary() returns null.
  • Have a test for whether LoadDictionary() returns an empty dictionary.
  • etc.

Don't have a single test containing all of these asserts. Definitely don't have several tests containing most of these asserts followed by the actual thing each test is meant to examine.
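For illustration, that split might look something like this (a sketch using MSTest attributes, since the question mentions the VS 2010 integrated tools; LoadDictionary() is the method from the question, stubbed here so the sketch is self-contained):

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// One condition per test: a failing test names the broken condition
// directly in the test runner, with no guard asserts needed.
[TestClass]
public class LoadDictionaryTests
{
    [TestMethod]
    public void LoadDictionary_DoesNotReturnNull()
    {
        Assert.IsNotNull(LoadDictionary());
    }

    [TestMethod]
    public void LoadDictionary_ReturnsNonEmptyDictionary()
    {
        Assert.IsTrue(LoadDictionary().Count > 0);
    }

    [TestMethod]
    public void LoadDictionary_MapsExpectedKeyToExpectedValue()
    {
        Assert.AreEqual("ExpectedValue", LoadDictionary()["ExpectedKey"]);
    }

    // Stand-in for the real method under test.
    private static Dictionary<string, string> LoadDictionary()
    {
        return new Dictionary<string, string> { { "ExpectedKey", "ExpectedValue" } };
    }
}
```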

Paul Butcher
What's the inherent benefit of building hundreds of test cases to test the output of a Dictionary that returns hundreds of key/value pairs? I can tell you one whopping great downside: you'd end up running 100% of the same code under test hundreds of times. If each call to LoadDictionary() took three seconds, you'd be looking at several minutes instead of a few seconds to test its output!
Warren
That is not what I suggested. What's the benefit of testing the lookup function for every single value you think is in a dictionary? That sounds a lot like testing an arithmetic function with every combination of numbers you think it is likely to be called with.
Paul Butcher
The Dictionary is just a simple example that's domain-neutral to explain what I'm asking.
Eric J.
"It is advisable to only have one assert per test" - I disagree. It is advisable to have _one test case_ per test method, the verification of which may well need multiple asserts.
Péter Török
+1  A: 

Personally, I don't think there's a lot of value to pre-emptively revisiting existing tests to insert the optional Asserts.

If this is an existing project, hopefully all your tests are passing. If that's the case, they'll only start breaking when you refactor and/or change or add something to your code.

I think it'd be easier to deal with the tests that break if and when they break. Sure, you'll be getting a generic failure for your test, but by throwing in a breakpoint (or generally just investigating) you'll pretty easily discover the true source of your failure.

Alternatively, when a test fails, you could add the optional Asserts then to clarify the error. That way you won't be using time upfront to add additional Asserts to tests that aren't going to fail, but you still get the benefit when you do.


If, however, this is a proactive move (as you've suggested) that outlines guidelines for testing, I think it really depends on the test itself and the benefit you receive from the additional Asserts. How much time will you really save by knowing dict is null rather than just missing a key? Ultimately, you should only be testing one thing, so if you start to find a lot of assertions in the one test, there's probably something wrong.

Personally, I don't think a global policy dictating the assertions required is something worth implementing. I think it should be decided on a case-by-case basis. For some tests, like the example you've given, there's probably a lot of value in some of the additional assertions. For something simpler that's unlikely to fail, there's probably not.

Forcing developers to catch and describe every possible discrete failure is a bit pessimistic; it implies you expect tests to fail frequently enough that saving a few minutes of diagnosis each time is worthwhile.

Damovisa
I'm considering guidelines for new projects, not revisiting existing ones.
Eric J.
@Eric - Ok, it looked as if this was a reactive move. I'll update.
Damovisa
@Damovisa: I agree that people should be able to decide what asserts make sense for a given test, though without guidelines everyone will make those decisions based on personal preference rather than team preference. I'm leaning toward a guideline similar to what ratkok suggests that leaves the ultimate decision up to the developer.
Eric J.
+1  A: 

What's important is to be able to quickly understand the origin of the failure when the bar goes red.

Let's suppose there's only one assertion:

Assert.IsTrue(dict["ExpectedKey"] == "ExpectedValue",
              "'ExpectedKey' is present but value is not 'ExpectedValue'");

What happens when LoadDictionary() returns null? If that crashes the entire unit-test application, as it would in C/C++, then an assertion guard is necessary. If the failure message clearly indicates that dict was null, then the optional assertions are pointless.

The second question is what happens when the expected key is not in the dictionary? Once again, the error message should make the distinction. If the assertion testing the value associated with the key throws an exception (in C#, a KeyNotFoundException), then it's fine: there is no need to first test that the key exists and then test its value. One assert is enough here.
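In C# specifically, the Dictionary indexer throws KeyNotFoundException for a missing key, and a test runner would report that exception as the failure cause. A minimal sketch of that failure mode:

```csharp
using System;
using System.Collections.Generic;

// Demonstrates the missing-key failure mode: indexing a Dictionary
// with an absent key throws KeyNotFoundException, so the test report
// already distinguishes this case without a guard assert.
class MissingKeyDemo
{
    static void Main()
    {
        var dict = new Dictionary<string, string> { { "OtherKey", "OtherValue" } };
        try
        {
            string value = dict["ExpectedKey"]; // no ContainsKey guard first
        }
        catch (KeyNotFoundException ex)
        {
            Console.WriteLine("Test would fail with: " + ex.GetType().Name);
        }
    }
}
```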

The assertion that the dictionary is not empty is redundant, since the objective of this test is to verify that a certain key is present with a certain value.

Last, using some sort of equality assertion would give a more accurate error message:

Assert.AreEqual("ExpectedValue", dict["ExpectedKey"],
                "Incorrect value for 'ExpectedKey'");

The error should then read something like "Expected: ExpectedValue, Actual: <actual value>". Knowing the incorrect value for the key can be helpful.

philippe
+2  A: 

I think there is value in doing something like this, but you have to be careful about how you use it. I also work on a large, multi-person project, and we have recently started to use a similar approach in our unit-testing strategy.

We try to have one test per "execution path", and we have test cases with multiple asserts. However, we use both fatal and non-fatal asserts, and only non-fatal asserts are used in those test cases that have multiple asserts. A fatal assert is also used in these test cases (one per TC) to validate a condition which, if it fails, means there is no point in asserting anything else. This approach helps us localize errors faster, since sometimes more than one assert fails.

Combined with custom logs that provide additional information on failures, testing and debugging is much faster and more efficient.
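ratkok doesn't name a framework, but the fatal/non-fatal distinction can be sketched with a small helper (the class and method names here are hypothetical): a fatal check throws immediately, while non-fatal checks are collected and reported together at the end of the test.

```csharp
using System;
using System.Collections.Generic;

// Minimal soft-assert helper (hypothetical; not from any named framework).
// Fatal checks abort the test immediately; non-fatal checks are recorded
// and reported together when AssertAll() is called.
public class SoftAssert
{
    private readonly List<string> failures = new List<string>();

    public void Fatal(bool condition, string message)
    {
        // If this fails, nothing else in the test is worth checking.
        if (!condition)
            throw new Exception("FATAL: " + message);
    }

    public void NonFatal(bool condition, string message)
    {
        // Record the failure but keep going, so one run can surface
        // several broken conditions at once.
        if (!condition)
            failures.Add(message);
    }

    public void AssertAll()
    {
        if (failures.Count > 0)
            throw new Exception("Non-fatal failures:\n" + string.Join("\n", failures.ToArray()));
    }
}
```

In a test, a fatal check (e.g. that the dictionary is not null) would come first, followed by non-fatal checks on the individual keys, with AssertAll() as the last line.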

However, looking at your example, I am not sure that "multiple/optional asserts" is really a good approach, as most likely you do not want to test these basic functionalities over and over again (LoadDictionary(), not empty, etc.). I think that in your case, the test-case setup should ensure that the dictionary is not empty and that LoadDictionary() performs as expected (already covered by specific TCs). The goal of this test case seems to be validating the lookup, and it should be focused on testing that one thing. Everything else is setup or other functionality and really should not belong to this TC.

ratkok
@Ratkok: Thanks for your insight. The Dictionary example was chosen as something everyone can readily understand. For the real tests, the multiple asserts would be domain-specific.
Eric J.