views:

1089

answers:

6
+3  Q: 

JUnit's @Ignore

I wonder whether it is good practice to use JUnit's @Ignore, and how people are using it.

I came up with the following use case: let's say I am developing a class and writing a JUnit test for it, which doesn't pass because I'm not quite done with the class. Is it good practice to mark that test with @Ignore?

I'm a little concerned that we might miss the ignored test case later on, or that people will start using it to "force" tests to pass CI.

+5  A: 

That's pretty much fine, I suppose.

The docs say:

Native JUnit 4 test runners should report the number of ignored tests along with the number of tests that ran and the number of tests that failed.

Hence, even if you forget to remove the annotation afterwards, you will still be notified about the ignored test.

The example given in the docs closely resembles your case:

@Ignore("not ready yet")
Adeel Ansari
+1  A: 

Well, if you're not done with the class, it's good that the test fails. Marking it as @Ignore would mean you could ship code with an unfinished class. And right, maybe you're not using that class yet in any code that gets executed, but someday another developer might see that class and use it. Then it fails for them, even though it should work.

I wouldn't use @Ignore in that case for sure.

kender
I think its purpose is to separate the two, actually failing and incomplete, so they don't get mixed up. Otherwise someone might start trying to fix it, when actually the code is incomplete, not buggy. Or maybe it's about personal preference and what feels natural to the person.
Adeel Ansari
@Adeel Ansari: I agree with @kender. I am no expert on JUnit, but I think this case would be better handled by an @Incomplete or @NotReadyForTesting annotation rather than @Ignore. And that annotation belongs on the class, not the test.
Hemal Pandya
@Hemal Pandya: exactly, if the implementation is not finished, the class should be annotated, not the test.
kender
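As a side note on annotating the class: JUnit 4's @Ignore can also be placed on the test class itself, which skips every test in it. A minimal sketch with invented names:

    import org.junit.Ignore;
    import org.junit.Test;

    // Placing @Ignore on the class skips all of its tests, which matches
    // "annotate the class, not the test" while the implementation is unfinished.
    @Ignore("FooService implementation not finished yet")
    public class FooServiceTest {

        @Test
        public void doesSomethingUseful() {
            // reported as ignored, not as a failure
        }
    }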
Thanks for the input, guys. I've been pondering this for a while and haven't found a proper solution. I don't want to mix incomplete with failing, because they're not the same thing. On the other hand, I don't want tests to fall through the cracks. It might be a question of proper tooling.
david
@Kender: And then the test runner can report the number of tested-but-incomplete classes, a statistic that can be really useful for determining deliverability.
Hemal Pandya
Hemal, your suggested annotations sound more like a post-testing methodology, where there is no test code written yet because the actual code is incomplete. In the case of pre-testing, meaning you write your test code before the actual code, one might need to annotate the test with @Ignore. But again, PP.
Adeel Ansari
PP might sound weird. Actually, it's Personal Preference. There weren't many characters left :)
Adeel Ansari
@Adeel Ansari: I have read about, but haven't myself tried, the pre-testing methodology you mention. It might make sense there, because the deliverability criterion would be that there are no ignored tests. Thanks for the clarification.
Hemal Pandya
Check out my blog post on pre-testing (TDD): http://dlinsin.blogspot.com/2008/06/nothing-wrong-with-tdd-right.html
david
A: 

I think using @Ignore is OK as long as one of the following applies:

  1. There is a good reason why the method cannot be tested in some form, and it is documented as such in the code. This should be a special case and should warrant a discussion or code review to see if there is any way to test it.
  2. The test has not been written yet; this should ideally happen only for legacy code. It should also be subject to code review, and tasks should be created to add the tests.

Those are the rules, at least in my mind ;-)

Nikhil
+2  A: 

IMHO, @Ignore should not be used lightly... due to the broken windows effect.

I rarely find myself using this attribute/annotation in xUnit. The few times I've used it have been as a TODO: while writing TestCase#1, I see another test case that I missed but which should also be included. Just so that I don't forget it, I write a small test case with a descriptive name and mark it with Ignore, then proceed to complete TestCase#1. But this is all intra-check-in; I never check in tests marked with Ignore.

However, usually I just use a piece of paper (a test list) to jot down the new test case, which is much simpler. This also caters to the scenario where I'm partially done... say, I've completed 5 of 10 tests. Instead of checking in 5 ignored tests, I'd keep the test list around and check in 5 passing tests. The assumption is that you'll complete the rest in the next few check-ins before jumping to something new.

The other 'special case' I can think of is...
When you're waiting for a component from another team/person/vendor (whose interface has been published and agreed to), without which the tests can't run. In this case, you can write the tests and mark them with Ignore("Waiting on X to deliver component Y").
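A minimal sketch of that situation (the team, class, and method names are invented): the interface is agreed on, the tests are written, and they stay ignored until the real component arrives.

    import org.junit.Ignore;
    import org.junit.Test;

    // The PricingService interface has been published and agreed to,
    // but the real implementation is still being delivered by another team.
    interface PricingService {
        int quoteInCents(String sku);
    }

    public class CheckoutTest {

        @Ignore("Waiting on TeamX to deliver the PricingService implementation")
        @Test
        public void totalUsesRealPricingService() {
            // To be completed once the real PricingService is available;
            // until then the test is reported as ignored instead of failing CI.
        }
    }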

Gishu
In this case it would turn into something else, not a unit test. Do you mean one unit developed by two or more people, or testing multiple units with a single case? Not a good idea, I'm afraid.
Adeel Ansari
I meant: imagine you are writing tests for ClassA, which needs ClassB as a collaborator. ClassB, however, is being developed in parallel by someone else. You could use mocks, but you should still test with the real collaborator. Ignore("Till ClassB done")
Gishu
I tend to agree. Thanks for your explanation.
Adeel Ansari
+2  A: 

I think that is a perfectly good way to use it.

Your CI server should be green (or blue, in Hudson's case) all the time. Whenever it isn't, your first priority is to fix it.

Now, if CI broke because of a bug in the test code (perhaps the test code is naughty and non-deterministic), then you should just ignore the test with @Ignore("This test code is broken, raised defect #123") and raise the bug in your defect tracker.

You won't ship broken code, because whenever you ship you review all defects and decide whether any of them are show-stoppers, right? A broken test that isn't running will be considered along with the code/feature it was testing. You ship if and only if you're happy shipping despite the fact that the code it was testing could also be broken. If it ain't tested, consider it broken.

I'm hoping the JUnit XML report formatter, used when running tests from Ant, will one day include the ignored count (and the reasons) along with the pass, fail, and error counts. Maybe then CI vendors will include the ignored test counts (if not, I may have to write a Hudson plugin...).

floater81
+1 - @Ignore should be used to (temporarily) skip a test that you're not convinced is valid, rather than to hide a problem with the unit under test itself.
Andrzej Doyle
A: 

I routinely use @Ignore for tests which fail because of a known bug. Once the bug is acknowledged and logged in the bug database, the test failure serves no purpose, since the bug is already known.

Still, it makes sense to keep the test code, because it will be useful again once the bug is fixed. So I mark the test as ignored, with a comment indicating the related bug, and ideally I note in the bug report that the test should be reactivated to verify the fix.
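For example (the class name and bug ID below are made up), a minimal sketch of keeping such a test around but ignored until the bug is fixed:

    import org.junit.Ignore;
    import org.junit.Test;

    public class MoneyFormatterTest {

        // The production code has a known, logged defect; the test stays in
        // the code base but is skipped until the fix lands, at which point
        // removing @Ignore turns it back into a check that verifies the fix.
        @Ignore("Fails because of known bug PROJ-482; reactivate to verify the fix")
        @Test
        public void formatsNegativeAmountsWithParentheses() {
            // assertion deliberately left out of this sketch
        }
    }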

sleske