I've recently started using code coverage tools (particularly Emma and EclEmma), and I really like the view that it gives me as to the completeness of my unit tests - and the ability to see what areas of the code my unit tests aren't hitting at all. I currently work in an organization that doesn't do a lot of unit testing, and I plan on really pushing everyone to take on unit testing, code coverage, and TDD, and hopefully convert the organization.

One issue that I'm unsure of with this subject is exactly how far I should take my code coverage. For example, if I have a class such as this:

//this class is meant as a pseudo-enum - I'm stuck on Java 1.4 for the time being
public final class BillingUnit {

    public static final BillingUnit MONTH = new BillingUnit("month");
    public static final BillingUnit YEAR = new BillingUnit("year");

    private final String value;

    private BillingUnit(String value) {
        this.value = value;
    }

    public String getValue() {
        return this.value;
    }

    public boolean equals(Object obj) {
        return value.equals(((BillingUnit) obj).getValue());
    }

    public int hashCode() {
        return value.hashCode();
    }
}

I wrote some simple unit tests to make sure that equals() works correctly, that getValue() returns what I expected, etc. But thanks to the visual nature of EclEmma, the hashCode() method shows up as bright red for "not tested".

Is it worthwhile to even bother to test hashCode(), in this example, considering how simple the implementation is? I feel like I would be adding a unit test for this method simply to bump the code coverage % up, and get rid of the glaring red highlight that EclEmma adds to these lines.

Maybe I'm being neurotic and OCD-like, but I find that using something like EclEmma - which makes it so easy to see what is untested, since the plugin highlights uncovered source code in red and covered code in green - really makes me want to push to get as many classes 100% green as I can, even when it doesn't add much of a benefit.

+2  A: 

It may not be too complicated now, but a simple check to verify that it is still working as expected can be very useful later on if the method gets modified.

Since the check should be really easy to write, why not do so? It helps the stats, and also helps give you a check later just in case it breaks.

Also, for TDD, you would like 100% coverage, because then you can be sure (or very close to it) that you're not breaking anything when you refactor.
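A minimal sketch of such a check, assuming the BillingUnit class from the question (a copy is nested here so the example is self-contained; plain assertions are used rather than any particular test framework):

```java
// Sketch of a simple hashCode consistency check. BillingUnit is
// copied (trimmed) from the question so this compiles on its own.
public class BillingUnitHashCodeTest {

    static final class BillingUnit {
        private final String value;
        BillingUnit(String value) { this.value = value; }
        public String getValue() { return value; }
        public boolean equals(Object obj) {
            return value.equals(((BillingUnit) obj).getValue());
        }
        public int hashCode() { return value.hashCode(); }
    }

    public static void main(String[] args) {
        BillingUnit a = new BillingUnit("month");
        BillingUnit b = new BillingUnit("month");

        // The equals/hashCode contract: objects that are equal
        // must report equal hash codes.
        if (!a.equals(b) || a.hashCode() != b.hashCode()) {
            throw new AssertionError("equals/hashCode contract violated");
        }
        // Consistency: repeated calls must return the same value.
        if (a.hashCode() != a.hashCode()) {
            throw new AssertionError("hashCode not consistent");
        }
        System.out.println("ok");
    }
}
```

If the class is later changed (say, a second field is added to equals but forgotten in hashCode), this check starts failing immediately.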

chills42
+5  A: 

I use code coverage to give me hints on places where I may have an incomplete set of tests. For example, I may write a test for some given functionality, then go develop the code that satisfies that functionality, but in doing so actually write code that does more than it is supposed to -- say it might catch an exception in an alternate case that the test doesn't exercise. When I use the coverage analyzer, I can see that I've introduced code that doesn't have an associated test. It helps me to know when I haven't written enough tests.

On the other hand, coverage analysis can lead to false security. Having all of your code covered does not mean that you have enough tests. You need to think about tests from the perspective of what the code should do, and write tests to make sure that it does it - preferably by writing the test first. Just because your code is completely covered does not mean that the code does what it is supposed to do.

In your example, I would have written the test for hashCode to define what the functionality of the method does, before I wrote the code. Therefore, I would have it covered. That doesn't mean that I always have 100% coverage. I'm not overly zealous about writing tests for simple accessors, for example. I also may not test methods inherited from a framework parent class, since I don't feel the need to test other people's code.

tvanfosson
+2  A: 

I think it is worthwhile to use a library where you can choose to ignore certain kinds of statements. For example, if you have a lot of:

if(logger.isDebugEnabled()) {
    logger.debug("something");
}

It is useful if you can turn off coverage calculations for those sorts of lines. It can also be (arguably) valid to turn off coverage calculations for trivial getters and setters (those that simply set or return a member variable with no other checks or side-effects). I do, however, think that if you have overridden equals and hashCode, those should be tested. You added non-trivial functionality, and it should be tested.

Just to be clear, the reason I think the above situations should be excluded from coverage is that:

  • Not testing that functionality is ok. You shouldn't have to run through your entire suite of tests 5 times, with the logging library set to each logging level, just to make sure that all your statements are hit.
  • If you did the above, it would skew your coverage the other way. If 90% of your branches are if(log.isDebugEnabled()), and you test them all but no other branches, it will look like you have 90% branch coverage (good), when in reality, you have 0% non-trivial branch coverage (bad!!).
Chris Marasti-Georg
I think this is a good point. Although to handle this, I did set up my Emma ant target to use a log4j configuration with the root logger set to DEBUG, and the appender filters at OFF level (so all log.debug() statements are hit, but not logged anywhere).
matt b
You will still run into issues, because if you check the log level, it will always evaluate to true (or false, depending on the config), so your branch coverage will only be 50%.
Chris Marasti-Georg
But I'm not doing any branches - there is no else for if(log.isDebugEnabled()) - right?
matt b
Most coverage tools will still check that you test each condition. So for that branch, there are 2 possibilities - it can evaluate to true, or it can evaluate to false. If it is always true, you will only have 50% coverage.
Chris Marasti-Georg
A: 

As I've said elsewhere, low code-coverage is a problem, but high code-coverage doesn't mean that you're writing pure gold.

Even if you weren't concerned about code coverage, I'd suggest that you'd still need tests for equals() and getValue(). Depending on how and where hashCode() is being used (sorry, I'm a C# developer), you might want to test that too.

The most important part is that your tests give you confidence that you've written the right code and that the code works as expected.

David Kemp
+1  A: 

A fellow programmer stuck, like me, in old Java 1.4 ;)

As said in my previous answer, code covered is not code tested. And the conclusion was: from a certain point, the only way to improve code coverage is... to delete code!

Now, regarding hashCode, it is interesting to have it covered by a unit test designed to check that the expected hashing contract is respected (and not just to cover one more function).

VonC
+2  A: 

There is a difference between code coverage and testing coverage. You should attempt to ensure that your code is adequately tested rather than aiming for 100% code coverage.

Consider the following code:

public float reciprocal (float ex)
{
    return (1.0f / ex) ;
}

If you ran one test that passed in a value of 1.0 then you would get 100% code coverage (branch and statement) with all passes. The code obviously has a defect.

Measuring test coverage is more difficult, and it is a skill that comes from becoming a better developer and tester.

With respect to hashCode specifically, a purist would say that you should have a separate unit test for that method. I personally would ensure that it is included in at least one unit-integration test, and would not test each accessor/modifier directly. The return on your investment is often too low to justify the effort. This assumes, of course, that you have a unit test that ensures you are generating the correct hash code.

Ryan Boucher
The above code only has a defect in it if reciprocal() can be passed a zero. If not, then the code is complete. If it can, then you should have written a test that covers that scenario.
quamrana
+3  A: 

Reaching 100% code coverage with meaningful tests may not be worth the climb, and, as mentioned before, 100% Code Coverage doesn't necessarily mean that all the possible conditions in the application have been tested.

As for testing equals, hashCode and some other contract based interfaces, like Comparable and Serializable, I do include those tests. It is important that the equals/hashCode contract is implemented correctly, likewise with equals/Comparable.

See JUnit-Addons, especially

Ken Gentle
A: 

In this particular case, I would say that since you're testing the equals method, you may as well also test that equal objects have equal hashcodes: just add an extra test to all the cases where equals is expected to return true.

That will give you code coverage, and into the bargain it will give you some confidence that hashCode actually satisfies its contract :-)

It's only marginally worthwhile, given that you obviously can trust the hashCode method of String, and you don't expect ever to change this class. But if you're suspicious enough of your equals method to test it at all, then you should be suspicious enough of it to test that it and hashCode remain consistent. And you should always be suspicious of the assumption that you will never want to change what you've done before. For instance, if someone comes along and decides to 'optimise' by adding a pointer-equality check, then you may as well have a complete set of tests for them to run their modified code against. Otherwise they'll waste time on the same worry you did - is it OK that I don't have code coverage?
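A minimal sketch of that extra assertion, assuming a table-driven equals test (the pairs here use plain Strings for illustration; in the question's code they would be BillingUnit instances):

```java
// Sketch: wherever a test expects equals() to return true, also
// assert that the hash codes match. The pairs below are
// illustrative placeholders, not from the question.
public class EqualsImpliesHashCodeTest {

    public static void main(String[] args) {
        String[][] equalPairs = {
            {"month", "month"},
            {"year", "year"},
        };
        for (int i = 0; i < equalPairs.length; i++) {
            String a = equalPairs[i][0];
            String b = equalPairs[i][1];
            if (!a.equals(b)) {
                throw new AssertionError("expected equal: " + a + ", " + b);
            }
            // The one extra line that also pins down hashCode:
            if (a.hashCode() != b.hashCode()) {
                throw new AssertionError("equal objects, unequal hash codes");
            }
        }
        System.out.println("ok");
    }
}
```

The cost is one assertion per existing case, and in exchange the hashCode lines turn green for a reason that actually matters: the contract is verified, not just executed.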

Steve Jessop