After writing new methods and unit tests for them I usually check the code coverage using the Agitar Eclipse plugin.

But in addition to the business logic there's a lot of logging code like:

if(log.isDebugEnabled()) { log.debug("Angela Merkel is REALLY hot!"); }

When Agitar checks this code, it (of course) complains that the if-statement is never false (because the log level is "ALL" while testing). So although my functional code is completely covered, Agitar says my tests don't cover everything.

This is quite unsatisfactory because I never reach 100% coverage, and I never know whether the untested code is really just logging code.

Do you have any hints or solutions?

+6  A: 

If this is the only thing you aren't covering in your tests then I'm sure you have very high coverage numbers. 100% should never really be a goal.

JC
Of course it's not the only thing which is not tested *g*
Olvagor
+2  A: 

I don't know whether this advice will help with the specific code coverage tool you're using, but if you declare a compile-time constant (here in a class of your own called, say, Log)

public static final boolean LOGGING_ENABLED = false;

and then write an if statement like

if (Log.LOGGING_ENABLED) {
    log.debug("Angela Merkel is REALLY hot!");
}

then the Java compiler will actually optimize away the if statement so that it never even ends up in your compiled .class file. (This works because LOGGING_ENABLED is a compile-time constant referenced through the class name; accessed through an instance reference it wouldn't count as a constant expression.) So if you then run a unit test with logging turned off, the if statement doesn't really exist and thus the code coverage tool may not complain.

Eli Courtwright
+2  A: 

Try changing the log level while running the tests, or force the change in your setup method.

You could create the logger in your setup and set its level manually, or use a config file just for the tests.

However, your tool might then complain that the condition is always true instead of false.

If that happens I would see whether your tool can be configured to ignore the if statement.
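
For example, a minimal sketch of the setup-method approach with log4j and JUnit 4 (Subject is a made-up stand-in for the class under test):

import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.junit.Before;

public class SubjectTest {

    @Before
    public void setUp() {
        // Forcing the level to ALL makes every isDebugEnabled() branch
        // reachable, regardless of the normal configuration.
        Logger.getLogger(Subject.class).setLevel(Level.ALL);
    }

    // ... tests exercising Subject go here ...
}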

mugafuga
A: 

Why not do the check for log.isDebugEnabled() in the log.debug(...) method?

Less code, and you can mock out your logger so you won't have this problem any more.
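
A minimal sketch of that idea, assuming a hand-rolled wrapper around log4j (AppLog and Log4jAppLog are made-up names, not an existing API):

import org.apache.log4j.Logger;

public interface AppLog {
    void debug(String message);
}

class Log4jAppLog implements AppLog {

    private final Logger log;

    Log4jAppLog(Class<?> clazz) {
        this.log = Logger.getLogger(clazz);
    }

    // the isDebugEnabled() check lives in exactly one place
    public void debug(String message) {
        if (log.isDebugEnabled()) {
            log.debug(message);
        }
    }
}

In a unit test you can hand the class under test a fake AppLog instead, so coverage of the guard no longer depends on the configured log level.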

jonnii
Yeah, I have to think about that, thanks a lot!
Olvagor
The check for isDebugEnabled is intended to optimize for the case in which you might want to log the entire XML dump of a document, an operation that would be expensive if you had to construct it every time and pass it along to log.debug(), which would almost always discard it in production.
Joe Liversedge
Yes, I know about this. I understood jonnii as saying that I should extend... oh dammit, you are right... okay, thanks for that comment!
Olvagor
See http://logging.apache.org/log4net/release/sdk/log4net.ILog.IsDebugEnabled.html
Bill Michell
A: 

First, I wouldn't worry about this little bit of non-coverage; your brain tells you that the code is doing what it's supposed to do. Second, if there are cases where the alternate path needs to be tested, think about using dependency injection and injecting a mock instance of the log from your unit test so that you can test both paths.

public class ClassUnderTest
{
    private ILog log;

    public ClassUnderTest() : this(null) {}

    public ClassUnderTest(ILog log)
    {
        if (log != null)
        {
            this.log = log;
        }
        else
        {
            // MyDefaultLogger() stands in for however you
            // normally obtain the production logger
            this.log = MyDefaultLogger();
        }
    }
}

public interface ILog
{
    ... methods ...
}

public class FakeLog : ILog
{
    public LogMode mode;

    ....
}

[TestMethod]
public void TestLogDebug()
{
    FakeLog mockLog = new FakeLog();
    mockLog.mode = LogMode.Debug;
    ClassUnderTest target = new ClassUnderTest(mockLog);
    ... run test
}

[TestMethod]
public void TestLogWarn()
{
    FakeLog mockLog = new FakeLog();
    mockLog.mode = LogMode.Warn;
    ClassUnderTest target = new ClassUnderTest(mockLog);
    ... run test
}
tvanfosson
+2  A: 

If you already have decent unit tests, a simple code coverage statistic won't tell you much. Even at 100%, code coverage is nowhere near a guarantee of bug-free code; the quality of your unit tests is probably a much better indicator of that.

Daan
+1  A: 

As others have said, this isn't really something to worry about. But if you want to get the number up as high as possible, you could do what I've done in the past: when running the coverage tool, configure your logging framework to use a special configuration file that sets the level to DEBUG and makes sure no appenders actually write anything (I did the latter part so that the command line output of EMMA would not be drowned out by my logging output).

Here's what I did with EMMA and Ant; of course the exact config will change depending on your tool:

Ant runs junit (in my 'tests' target) with a special log4j.properties:

<junit printsummary="on" fork="true" haltonfailure="false"
   failureproperty="tests.failed" showoutput="false">
    <!-- specify a special log4j config file for running the unit tests -->
    <jvmarg value="-Dlog4j.configuration=test/ant-junit-log4j.properties"/>
    ...

And the contents of ant-junit-log4j.properties:

#Special log4j.properties file used for running junit test cases from Ant
#we want to minimize output to the build script

#Note that appender CONSOLE has a threshold set to OFF so that it never actually logs
#Set the level to DEBUG so that all logging lines are hit in code coverage tests
log4j.rootCategory=DEBUG, CONSOLE

log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern= %-5p %c - %m%n
log4j.appender.CONSOLE.threshold = OFF
matt b
+1  A: 

I don't know what coverage tool you are using, but some allow you to mark code as not covered, so that they don't appear as problems. It's basically a way for the developer to tell the tool, "I know better than you do, don't bother me with this line of code".
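
For example, Atlassian's Clover recognizes source directives that exclude a region from instrumentation (whether Agitar offers something similar, I don't know):

///CLOVER:OFF
if (log.isDebugEnabled()) {
    log.debug("Angela Merkel is REALLY hot!");
}
///CLOVER:ON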

Ned Batchelder
+1  A: 

One improvement would be not to duplicate this if condition all over your code. Put the logic into a method that receives a message and logs it if logging is enabled, and call that method whenever you want to log. This way you would only get one warning (and your code would be more readable).
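
For instance, a minimal hand-rolled helper along these lines (LogUtil is a made-up name):

import org.apache.log4j.Logger;

public final class LogUtil {

    private LogUtil() {}

    // the isDebugEnabled() guard lives here, once,
    // instead of at every call site
    public static void debug(Logger log, String message) {
        if (log.isDebugEnabled()) {
            log.debug(message);
        }
    }
}

Call sites then shrink to LogUtil.debug(log, "...");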

ljorquera
http://logging.apache.org/log4net/release/sdk/log4net.ILog.IsDebugEnabled.html explains why removing the if statement is, in general, not a good idea. The if IS intentionally repeated inside the log.debug method already...
Bill Michell
@Bill It looks like it's saying you shouldn't do it because of the cost of constructing the log message. That goes away if you have the logger construct the log message. See my other response: http://stackoverflow.com/questions/265363/100-code-coverage-with-logging-code#393165
Dustin
A: 

If logging is an important part of your application's functionality then perhaps you should be testing it. One way to do this with log4j is to create a testing appender which records your logging calls:

import java.util.HashMap;
import java.util.Map;

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;

public class TestingAppender extends AppenderSkeleton {

    private final Map<String, Integer> expects = new HashMap<String, Integer>();

    protected void append(LoggingEvent event) {
        match(event.getRenderedMessage());
    }

    public void match(String candidate) {
        Integer i = expects.get(candidate);
        if (i != null) {
            expects.put(candidate, i + 1);
        }
    }

    public boolean checkExpectations() {
        // every expected message must have been logged at least once
        for (Integer matchCount : expects.values()) {
            if (matchCount.equals(0)) {
                return false;
            }
        }
        return true;
    }

    public void addExpectation(String expectedMessage) {
        expects.put(expectedMessage, 0);
    }

    // required by AppenderSkeleton
    public void close() {}

    public boolean requiresLayout() {
        return false;
    }
}

In your unit tests you can configure your logger in the setup method:

@Before
public void setUp() throws Exception {
    log = Logger.getLogger(Subject.class);
    ta = new TestingAppender();
    ta.setLayout(new SimpleLayout());
    log.addAppender(ta);
    subject = new Subject();
}

@Test
public void shouldLogDebug() {
    log.setLevel(Level.DEBUG);
    ta.addExpectation("my logging message");
    subject.writeDebugLevelLog();
    assertTrue(ta.checkExpectations());
}

@Test
public void shouldntLogDebug() {
    log.setLevel(Level.INFO);
    ta.addExpectation("my logging message");
    subject.writeDebugLevelLog();
    assertFalse(ta.checkExpectations());
}

And if it's not worth testing then consider removing the logging lines. There's nothing like excessive logging for obscuring the logic of your nice clean code.

Steven Hale
A: 

My advice is to run the logic of each test twice within each test method. For the first pass, set the logging level to ALL (for example LogManager.getLogger(YourClass.class).setLevel(Level.ALL);), then run the actual test. Then set logging to OFF (Level.OFF) and run the test again.
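
A minimal sketch of that pattern with log4j and JUnit 4 (Calculator and its add method are invented for illustration):

import static org.junit.Assert.assertEquals;

import org.apache.log4j.Level;
import org.apache.log4j.LogManager;
import org.junit.Test;

public class CalculatorTest {

    @Test
    public void addIsCoveredUnderBothLogLevels() {
        // first pass: every isDebugEnabled() branch in Calculator is taken
        LogManager.getLogger(Calculator.class).setLevel(Level.ALL);
        assertEquals(4, new Calculator().add(2, 2));

        // second pass: every isDebugEnabled() branch is skipped
        LogManager.getLogger(Calculator.class).setLevel(Level.OFF);
        assertEquals(4, new Calculator().add(2, 2));
    }
}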

MetroidFan2002
A: 

You should properly abstract your logging. If you find yourself writing the exact same code in many places, you clearly have an abstraction problem.

I include logging in my base classes (it can also be mixed in) in such a way that I can do this:

getLogger().debug("Junk:  %s", someObject);

I then run my tests with debug enabled (but using a null emitter) so that as much code as possible runs.

The log abstraction does the debug checks and doesn't assemble the string if debug logging is not enabled.
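
A rough sketch of the general shape of such an abstraction, assuming log4j underneath (this is not the actual code from the github project):

import org.apache.log4j.Logger;

public abstract class LoggingBase {

    private final Logger log = Logger.getLogger(getClass());

    // printf-style logging: the message is only assembled
    // when debug output is actually enabled
    protected void debug(String format, Object... args) {
        if (log.isDebugEnabled()) {
            log.debug(String.format(format, args));
        }
    }
}

Subclasses then call debug("Junk:  %s", someObject) without ever repeating the guard.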

You can see the code on github should you want to do something similar.

Dustin
A: 

Rule of thumb I use when dealing with coverage percentages: try to get as close to 100% as possible, but never expect to achieve it.

For example, how do you think you're going to get 100% coverage out of these two samples without altering the code? Oh, and these come from memory, so I hope I got them right...

First:

public enum Number {
    ONE, TWO
}

public void someMethod(Number n) {
    switch (n) {
        case ONE:
            doOne();
            break;
        default:
            doTwo();
    }
}

Second:

// this one is rather odd/poor design btw but I've actually seen this once
public class PoliticalParty {
    public interface Politician extends Person {
        String lie();
    }

    private class PoliticianImpl implements Politician {
        public String getName() { return "Richard Nixon"; }
        public String lie() { return "I am not a crook."; }
    }

    public Person getPolitician() {
        return new PoliticianImpl();
    }
}

The first example doesn't get fully covered because Number.TWO is treated as default behaviour and is never matched directly. The second isn't covered entirely because interfaces aren't covered (at least not by EclEMMA); but since the interface sits inside a public class, it is counted as a block of that class which isn't covered, so it lowers the percentage.

Achieving 100% coverage would mean doing tricks in your code base which sometimes aren't feasible or desirable. My personal best for a project has been almost 97%, but to be honest it wasn't worth it: I ended up writing a lot of redundant tests which didn't really serve a purpose other than pushing that percentage up.

Esko