views: 1277
answers: 7

Does heavy use of unit tests discourage the use of debug asserts? It seems like a debug assert firing in the code under test implies the unit test shouldn't exist or the debug assert shouldn't exist. "There can be only one" seems like a reasonable principle. Is this the common practice? Or do you disable your debug asserts when unit testing, so they can be around for integration testing?

Edit: I updated 'Assert' to debug assert to distinguish an assert in the code under test from the lines in the unit test that check state after the test has run.

Also, here is an example that I believe shows the dilemma: a unit test passes invalid inputs to a protected function that asserts its inputs are valid. Should the unit test not exist? It's not a public function. Perhaps checking the inputs at runtime would kill perf? Or should the assert not exist? The function is protected, not private, so it should be checking its inputs for safety.

+1  A: 

A good unit test setup will have the ability to catch asserts. If an assert is triggered the current test should fail and the next is run.

In our libraries low-level debug functionality such as TTY/ASSERTS have handlers that are called. The default handler will printf/break, but client code can install custom handlers for different behavior.

Our UnitTest framework installs its own handlers that log messages and throw exceptions on asserts. The UnitTest code then catches these exceptions if they occur and logs them as a failure, along with the asserted statement.

You can also include assert testing in your unit test - e.g.

CHECK_ASSERT(someList.getAt(someList.size() + 1)); // test passes if an assert occurs

Andrew Grant
+1  A: 

Do you mean C++/Java asserts for "programming by contract" assertions, or CppUnit/JUnit asserts? That last question leads me to believe that it's the former.

Interesting question, because it's my understanding that those asserts are often turned off at runtime when you deploy to production. (Kinda defeats the purpose, but that's another question.)

I'd say that they should be left in your code when you test it. You write tests to ensure that the pre-conditions are being enforced properly. The test should be a "black box"; you should be acting as a client to the class when you test. If you happen to turn them off in production, it doesn't invalidate the tests.

duffymo
By assert in the original question I meant the kind you call "programming by contract". I've attempted to clarify the issue in the question.
Steve Steiner
+2  A: 

Assertions in your code are (or ought to be) statements to the reader that say "this condition should always be true at this point." Done with some discipline, they can be part of ensuring that the code is correct; most people, though, just use them as debug print statements. Unit tests are code that demonstrates that your code correctly handles a particular test case; done well, they can both document the requirements and raise your confidence that the code is indeed correct.

Get the difference? The program assertions help you make it correct, the unit tests help you develop someone else's confidence that the code is correct.

Charlie Martin
+1  A: 

First, to have both Design by Contract (DbC) assertions and unit tests, your unit testing framework must be able to catch the assertions. If your unit tests abort because of a DbC failure, then you simply cannot run them. The alternative is to disable those assertions while compiling your unit tests.

Since you're testing non-public functions, what is the risk of a function being invoked with invalid arguments? Don't your unit tests cover that risk? If you write your code following the TDD (Test-Driven Development) technique, they should.

If you really want/need those DbC-style asserts in your code, then you can remove the unit tests that pass invalid arguments to the methods containing those asserts.

However, DbC-style asserts can be useful in lower-level functions (those not directly invoked by the unit tests) when you have coarse-grained unit tests.

philippe
+1  A: 

As others have mentioned, Debug asserts are meant for things that should always be true. (The fancy term for this is invariants).

If your unit test is passing in bogus data that is tripping the assert, then you have to ask yourself the question - why is that happening?

  • If the function under test is supposed to deal with the bogus data, then clearly that assert shouldn't be there.
  • If the function is not equipped to deal with that kind of data (as indicated by the assert), then why are you unit testing for it?

The second point is a trap quite a few developers fall into. Unit test the heck out of all the things your code is built to deal with, and assert or throw exceptions for everything else. After all, if your code is NOT built to deal with those situations and you cause them to happen, what do you expect to happen?
You know those parts of the C/C++ documentation that talk about "undefined behaviour"? This is it. Bail, and bail hard.

Orion Edwards
A: 

Like the others have mentioned, Debug.Assert statements should always be true; even if the arguments are incorrect, the assertion should hold, to stop the app from getting into an invalid state, etc.

Debug.Assert(_counter == somethingElse, "Erk! Out of wack!");

You should not be able to test this (and probably don't want to, because there is nothing you can really do about it!)

I could be way off but I get the impression that perhaps the asserts you may be talking about are better suited as "argument exceptions", e.g.

if (param1 == null)
  throw new ArgumentNullException("param1", "message to user");

That kind of "assertion" in your code is still very testable.

PK :-)

Paul Kohler
+1  A: 

This is a perfectly valid question.

First of all, many people are suggesting that you are using assertions wrongly. I think many debugging experts would disagree. Although it is good practice to check invariants with assertions, assertions shouldn't be limited to state invariants. In fact, many expert debuggers will tell you to assert any condition that may cause an exception, in addition to checking invariants.

For example, consider the following code:

if (param1 == null)
    throw new ArgumentNullException("param1");

That's fine. But when the exception is thrown, the stack gets unwound until something handles the exception (probably some top level default handler). If execution pauses at that point (you may have a modal exception dialog in a Windows app), you have the chance to attach a debugger, but you have probably lost a lot of the information that could have helped you to fix the issue, because most of the stack has been unwound.

Now consider the following:

if (param1 == null)
{
    Debug.Fail("param1 == null");
    throw new ArgumentNullException("param1");
}

Now if the problem occurs, the modal assert dialog pops up. Execution is paused instantaneously. You are free to attach your chosen debugger and investigate exactly what's on the stack and all the state of the system at the exact point of failure. In a release build, you still get an exception.

Now how do we handle your unit tests?

Consider a unit test that tests the code above that includes the assertion. You want to check that the exception is thrown when param1 is null. You expect that particular assertion to fail, but any other assertion failures would indicate that something is wrong. You want to allow particular assertion failures for particular tests.

The way you solve this will depend upon what languages etc. you're using. However, I have some suggestions if you are using .NET (I haven't actually tried this, but I will in the future and update the post):

  1. Check Trace.Listeners. Find any instance of DefaultTraceListener and set AssertUiEnabled to false. This stops the modal dialog from popping up. You could also clear the listeners collection, but you'll get no tracing whatsoever.
  2. Write your own TraceListener which records assertions. How you record assertions is up to you. Recording the failure message may not be good enough, so you may want to walk the stack to find the method the assertion came from and record that too.
  3. Once a test ends, check that the only assertion failures that occurred were the ones you were expecting. If any others occurred, fail the test.

For an example of a TraceListener that contains the code to do a stack walk like that, I'd search for SUPERASSERT.NET's SuperAssertListener and check its code. (It's also worth integrating SUPERASSERT.NET if you're really serious about debugging using assertions).

Most unit test frameworks support test setup/teardown methods. You may want to add code that resets the trace listener and asserts that there aren't any unexpected assertion failures in those areas, to minimize duplication and prevent mistakes.

UPDATE:

Here is an example TraceListener that can be used to unit test assertions. You should add an instance to the Trace.Listeners collection. You'll probably also want to provide some easy way that your tests can get hold of the listener.

NOTE: This owes an awful lot to John Robbins' SUPERASSERT.NET.

using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Diagnostics;
using System.Reflection;
using System.Text;

/// <summary>
/// TraceListener used for trapping assertion failures during unit tests.
/// </summary>
public class DebugAssertUnitTestTraceListener : DefaultTraceListener
{
    /// <summary>
    /// Defines an assertion by the method it failed in and the messages it
    /// provided.
    /// </summary>
    public class Assertion
    {
        /// <summary>
        /// Gets the message provided by the assertion.
        /// </summary>
        public String Message { get; private set; }

        /// <summary>
        /// Gets the detailed message provided by the assertion.
        /// </summary>
        public String DetailedMessage { get; private set; }

        /// <summary>
        /// Gets the name of the method the assertion failed in.
        /// </summary>
        public String MethodName { get; private set; }

        /// <summary>
        /// Creates a new Assertion definition.
        /// </summary>
        /// <param name="message"></param>
        /// <param name="detailedMessage"></param>
        /// <param name="methodName"></param>
        public Assertion(String message, String detailedMessage, String methodName)
        {
            if (methodName == null)
            {
                throw new ArgumentNullException("methodName");
            }

            Message = message;
            DetailedMessage = detailedMessage;
            MethodName = methodName;
        }

        /// <summary>
        /// Gets a string representation of this instance.
        /// </summary>
        /// <returns></returns>
        public override string ToString()
        {
            return String.Format("Message: {0}{1}Detail: {2}{1}Method: {3}{1}",
                Message ?? "<No Message>",
                Environment.NewLine,
                DetailedMessage ?? "<No Detail>",
                MethodName);
        }

        /// <summary>
        /// Tests this object and another object for equality.
        /// </summary>
        /// <param name="obj"></param>
        /// <returns></returns>
        public override bool Equals(object obj)
        {
            var other = obj as Assertion;

            if (other == null)
            {
                return false;
            }

            return
                this.Message == other.Message &&
                this.DetailedMessage == other.DetailedMessage &&
                this.MethodName == other.MethodName;
        }

        /// <summary>
        /// Gets a hash code for this instance.
        /// Calculated as recommended at http://msdn.microsoft.com/en-us/library/system.object.gethashcode.aspx
        /// </summary>
        /// <returns></returns>
        public override int GetHashCode()
        {
            return
                MethodName.GetHashCode() ^
                (DetailedMessage == null ? 0 : DetailedMessage.GetHashCode()) ^
                (Message == null ? 0 : Message.GetHashCode());
        }
    }

    /// <summary>
    /// Records the assertions that failed.
    /// </summary>
    private readonly List<Assertion> assertionFailures;

    /// <summary>
    /// Gets the assertions that failed since the last call to Clear().
    /// </summary>
    public ReadOnlyCollection<Assertion> AssertionFailures { get { return new ReadOnlyCollection<Assertion>(assertionFailures); } }

    /// <summary>
    /// Gets the assertions that are allowed to fail.
    /// </summary>
    public List<Assertion> AllowedFailures { get; private set; }

    /// <summary>
    /// Creates a new instance of this trace listener with the default name
    /// DebugAssertUnitTestTraceListener.
    /// </summary>
    public DebugAssertUnitTestTraceListener() : this("DebugAssertUnitTestListener") { }

    /// <summary>
    /// Creates a new instance of this trace listener with the specified name.
    /// </summary>
    /// <param name="name"></param>
    public DebugAssertUnitTestTraceListener(String name) : base()
    {
        AssertUiEnabled = false;
        Name = name;
        AllowedFailures = new List<Assertion>();
        assertionFailures = new List<Assertion>();
    }

    /// <summary>
    /// Records assertion failures.
    /// </summary>
    /// <param name="message"></param>
    /// <param name="detailMessage"></param>
    public override void Fail(string message, string detailMessage)
    {
        var failure = new Assertion(message, detailMessage, GetAssertionMethodName());

        if (!AllowedFailures.Contains(failure))
        {
            assertionFailures.Add(failure);
        }
    }

    /// <summary>
    /// Records assertion failures.
    /// </summary>
    /// <param name="message"></param>
    public override void Fail(string message)
    {
        Fail(message, null);
    }

    /// <summary>
    /// Gets rid of any assertions that have been recorded.
    /// </summary>
    public void ClearAssertions()
    {
        assertionFailures.Clear();
    }

    /// <summary>
    /// Gets the full name of the method that causes the assertion failure.
    /// 
    /// Credit goes to John Robbins of Wintellect for the code in this method,
    /// which was taken from his excellent SuperAssertTraceListener.
    /// </summary>
    /// <returns></returns>
    private String GetAssertionMethodName()
    {

        StackTrace stk = new StackTrace();
        int i = 0;
        for (; i < stk.FrameCount; i++)
        {
            StackFrame frame = stk.GetFrame(i);
            MethodBase method = frame.GetMethod();
            if (null != method)
            {
                if(method.ReflectedType.ToString().Equals("System.Diagnostics.Debug"))
                {
                    if (method.Name.Equals("Assert") || method.Name.Equals("Fail"))
                    {
                        i++;
                        break;
                    }
                }
            }
        }

        // Now walk the stack but only get the real parts.
        stk = new StackTrace(i, true);

        // Get the fully qualified name of the method that made the assertion.
        StackFrame hitFrame = stk.GetFrame(0);
        StringBuilder sbKey = new StringBuilder();
        sbKey.AppendFormat("{0}.{1}",
                             hitFrame.GetMethod().ReflectedType.FullName,
                             hitFrame.GetMethod().Name);
        return sbKey.ToString();
    }
}

You can add Assertions to the AllowedFailures collection at the start of each test for the assertions you expect.

At the end of every test (hopefully your unit testing framework supports a test teardown method) do:

if (DebugAssertListener.AssertionFailures.Count > 0)
{
    // TODO: Create a message for the failure.
    DebugAssertListener.ClearAssertions();
    DebugAssertListener.AllowedFailures.Clear();
    // TODO: Fail the test using the message created above.
}
Alex Humphrey