Regarding logging, the best advice from SO and other sites on the Internet seems to be:

void DoSomething() {
    Logger.Log("Doing something!");
    // Code...
}

Now, generally you'd avoid static methods, but logging is a special case where this is the easiest and cleanest route. Within the static class you can easily inject an instance via a config file/framework, giving you the same effect as DI.

My problem comes from a unit testing perspective.

In the example code above imagine the point of DoSomething() was to add two numbers together. I'd write my unit tests for this fine. What about the logging?

Would I write a unit test for the logging (using a mock instance for the logger itself)? I know that if so I would also have to write an integration test to prove the logger actually wrote to a log file, but I'm not sure.

Following Test Driven Development (which I do), wouldn't a unit test be required to dictate the interface?

Any advice?

+3  A: 

I've only ever written a few unit tests for logging. It's a pain, either making the production code messy (due to injecting the logger) or the test smelly (replacing the static logger with a mock). Unlike McWafflestix, I've often not found it to be worth the effort afterwards.

How much do you really want to know whether the logging is working, beyond what you'll see through other (hands-on) testing? You might want to use DI for the occasional class where logging is really important, but otherwise I just wouldn't bother testing logging.

This is assuming the log is of a debug nature - if it's an audit log or something like that (something with a functional requirement) that's a different matter.

Jon Skeet
Well, I've been told you never write a line of code without writing a test to prove its worth (TDD), so that's the motivation behind this.
Finglas
@Dockers, probably quoted to death, but here is Kent Beck's take on that. I think that line is more for beginners. Once you know what you are doing, you can indeed skip things, as Jon suggests. http://stackoverflow.com/questions/153234/how-deep-are-your-unit-tests/153565#153565
Yishai
Excellent. I had no idea that was on here - read Kent Beck's message. I'll check out the whole discussion. Cheers.
Finglas
A: 

I'd say it's probably reasonable to write a unit test for logging; I've done so before, and it turned out to be more useful than I initially anticipated. Unit testing logging gives you some assurance, at unit-test time, that the logging facility is working. I don't find it critical to unit test logging, but if you have the inclination, I doubt it'll be a bad thing either.

McWafflestix
A: 

Personally I think it's overkill to unit test logging statements. But if you really want to do it, then a mock logger would work or, if you're using a framework like log4j, you can write a custom appender that is used during test runs.
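As a sketch of that custom-appender idea, here is a minimal hand-rolled capturing handler using only the JDK's own java.util.logging (the class name, logger name, and message are invented for illustration, not from any framework):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// A hand-rolled test "appender" that captures records instead of writing them anywhere.
class CapturingHandler extends Handler {
    final List<LogRecord> records = new ArrayList<>();

    @Override public void publish(LogRecord record) { records.add(record); }
    @Override public void flush() { }
    @Override public void close() { }
}

public class LoggingTestSketch {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("MyLogger");
        logger.setUseParentHandlers(false); // keep test output off the console
        CapturingHandler handler = new CapturingHandler();
        logger.addHandler(handler);

        // The code under test would log here:
        logger.log(Level.INFO, "Doing something!");

        // The test asserts a record was delivered, with the expected message.
        if (handler.records.size() != 1) throw new AssertionError("expected one record");
        if (!"Doing something!".equals(handler.records.get(0).getMessage()))
            throw new AssertionError("unexpected message");
        System.out.println("ok");
    }
}
```

The same pattern works with a log4j custom appender; the JDK version just avoids any test-scoped dependency.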

Mark
+2  A: 

Most logging frameworks allow you to provide custom implementations of their components. You can use that configuration mechanism to plug in your own.


For instance, Java's Log4J allows you to declare custom appenders, which are the components responsible for 'delivering' a LoggingEvent.

A logger can be easily mocked and injected using:

Appender appenderMock = EasyMock.createMock(Appender.class);
/* expect */ appenderMock.doAppend(EasyMock.isA(LoggingEvent.class));
EasyMock.replay(appenderMock);

Logger.getLogger(/* replace with your own */"My logger").addAppender(appenderMock);

// exercise the code under test, which logs through "My logger", e.g.:
Logger.getLogger("My logger").info("some message");

EasyMock.verify(appenderMock);

This test only verifies that a logging event is sent, but you can refine it much more using EasyMock.

Robert Munteanu
+8  A: 

Personally, I practice TDD/BDD pretty religiously and I almost never test logging. With some exceptions logging is either a developer convenience or a usability factor, not part of the method's core specification. It also tends to have a MUCH higher rate of change than the actual semantics of the method, so you wind up breaking tests just because you added some more informational logging.

It's probably worthwhile to have some tests that simply exercise the logging subsystem, but for most apps I wouldn't test that each class uses the log in a particular way.

Avdi
That's a fair point. I'm very much aware that 100% code coverage means nothing, and this generally seems the most pragmatic solution. As you said, whether the logging is working or not should not affect the business logic.
Finglas
A: 

I usually do not unit test logging statements by asserting on what's logged, but I do check that the code paths taken by my unit tests cover the logging statements, just to make sure I don't get an exception while logging an exception!

Gaël Marziou
+1  A: 

I would divide the logging into three categories:

1) A requirement. Some systems require logging for audit purposes, or to fill some other requirement of the project (such as a logging standard in an app server). Then it is indeed a requirement and deserves unit tests and acceptance tests to the point where you can be confident the requirement is met. So in this case the exact string of the log may be tested for.

2) Problem solving. In case you start seeing weird state in QA or production, you want to be able to trace what is going on. In general, if this is important (say, in a heavily threaded application where state can get complicated but can't be reproduced via known steps), then testing that the given state values end up logged can be valuable (you aren't testing the whole readability of the log, just that certain facts get in there). Even if the class is changed later, that state is still likely to be logged (along with additional state), so the coupling between the test and the logging is reasonable. So in this case, only parts of the logging are tested for (a "contains" test).

3) A development aid. In many cases I use logging as a more robust form of commenting. You can write a statement like:

 logger.debug("Extract the fifth instance of BLAH from the string " + s);

This way you document the code and at the same time have a useful artifact if you ever need to debug what is going on. In that case I would not unit test at all, as the existence or absence of a given statement is not important on its own.
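For case (2), a "contains"-style test might look like the following sketch (plain java.util.logging, with invented names and log wording); it pins down the facts in the log without asserting the exact phrasing:

```java
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Sketch: assert that certain facts appear somewhere in the log,
// without pinning down the full wording.
public class ContainsLogTest {
    public static void main(String[] args) {
        StringBuilder captured = new StringBuilder();
        Logger logger = Logger.getLogger("StateLogger");
        logger.setUseParentHandlers(false);
        logger.addHandler(new Handler() {
            @Override public void publish(LogRecord r) { captured.append(r.getMessage()).append('\n'); }
            @Override public void flush() { }
            @Override public void close() { }
        });

        // Imagine this line lives somewhere in the code under test:
        int retryCount = 3;
        logger.info("Giving up after retries, retryCount=" + retryCount + ", state=DEGRADED");

        // The "contains" test: only the facts matter, not the exact phrasing.
        String log = captured.toString();
        if (!log.contains("retryCount=3")) throw new AssertionError("retry count not logged");
        if (!log.contains("state=DEGRADED")) throw new AssertionError("state not logged");
        System.out.println("ok");
    }
}
```

Because only the state values are asserted, rewording the message later (or adding more state to it) does not break the test.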

As for the view that you have to test 100% of everything, see Kent Beck's answer here. I think that the "test everything" is good advice for beginners, because when you start with TDD, the temptation will be to not test anything that is hard to test, or that pushes you to think about the design to make it testable, and rationalize it as unimportant. But once you do know what you are doing, and appreciate the value of the tests, then it is important to balance out what you are doing with what is worth testing.

Yishai
A: 

I would probably have a separate body of unit tests for the logger itself, to test out its various functions separately from everything else. In methods that are using the logger, I would just test that the logger was invoked (i.e. expect that it was called) with the right parameters. For example, if I have a method:

DoSomething()
{
    if (FatalErrorOccurred)
    {
        Logger.Log("Fatal Error", ErrorLevel.Fatal);
    }
}

I would write a test that shows the logger logged a fatal error message when FatalErrorOccurred was true. I would not, of course, test the contents of the error message itself, as that is very susceptible to change.
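In the same spirit, here is a sketch in Java (java.util.logging's SEVERE standing in for the hypothetical ErrorLevel.Fatal above) that asserts the level of the logged record but deliberately not its message text:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Sketch: verify the level of the logged record, not its wording.
public class FatalLogTest {
    static final Logger LOGGER = Logger.getLogger("App");

    // Stand-in for the DoSomething() method under test.
    static void doSomething(boolean fatalErrorOccurred) {
        if (fatalErrorOccurred) {
            LOGGER.log(Level.SEVERE, "Fatal Error");
        }
    }

    public static void main(String[] args) {
        List<LogRecord> records = new ArrayList<>();
        LOGGER.setUseParentHandlers(false);
        LOGGER.addHandler(new Handler() {
            @Override public void publish(LogRecord r) { records.add(r); }
            @Override public void flush() { }
            @Override public void close() { }
        });

        doSomething(true);

        // Assert the level, but deliberately not the message text.
        if (records.size() != 1) throw new AssertionError("expected one record");
        if (!Level.SEVERE.equals(records.get(0).getLevel()))
            throw new AssertionError("expected a SEVERE record");
        System.out.println("ok");
    }
}
```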

Secret Agent Man
A: 

Although I agree with others that I wouldn't apply TDD to logging, I would try to ensure that unit testing covers all code paths that contain logging statements. Importantly, make sure the highest verbosity level is configured while running the unit tests, so that all logging statements are executed.

For example, the following code has a bug that will throw a FormatException only if Debug-level tracing is enabled.

if (logger.IsDebugEnabled)
{
    // Bug: the format string refers to {1}, but only one argument ({0}) is supplied.
    string message = String.Format(CultureInfo.CurrentCulture, 
          "... {1} ...", someValue);
    logger.Debug(message);
}
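The same trap can be reproduced in Java. This sketch (invented names) shows the guarded formatting bug staying silent at the default level and surfacing only once the test configuration enables the most verbose level:

```java
import java.util.IllegalFormatException;
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: a guarded formatting bug that only surfaces when the level is enabled.
public class VerbosityBugDemo {
    static final Logger LOGGER = Logger.getLogger("Demo");

    static void doWork(String someValue) {
        if (LOGGER.isLoggable(Level.FINE)) {
            // Bug: two %s specifiers, one argument -> MissingFormatArgumentException.
            LOGGER.fine(String.format("... %s ... %s ...", someValue));
        }
    }

    public static void main(String[] args) {
        doWork("x"); // default effective level is INFO: the buggy branch is silently skipped

        LOGGER.setLevel(Level.FINE); // the highest-verbosity test configuration
        boolean threw = false;
        try {
            doWork("x");
        } catch (IllegalFormatException e) {
            threw = true;
        }
        if (!threw) throw new AssertionError("expected the formatting bug to surface");
        System.out.println("ok");
    }
}
```

A test suite that runs at the default level would never execute the faulty line, which is exactly why cranking verbosity up during test runs pays off.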
Joe