Hi all,

I'm working on an automated regression test suite for an app I maintain. While developing it, I ran across some behavior that's almost certainly a bug. So, for now, I've modified the test so it doesn't register a failure -- that is, it deliberately lets this bad behavior go by.

So, I'm interested in the opinions of others on this site. Obviously, I'll add an entry to our defect tracking to make sure this behavior gets fixed. But are there any compelling reasons (either way) to let the regression test keep reporting the failure, versus leaving it modified to pass until we can get around to fixing the defective behavior? I think of this as a six-of-one, half-a-dozen-of-the-other kind of question, but I ask here because I thought others might see it differently.
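For concreteness, here is the kind of middle ground I'm weighing, sketched in pytest terms (our suite may well differ, and all the names here are hypothetical): an expected-failure marker keeps the known bug visible in every report without reading as a new regression, and it flags the run the moment the bug is actually fixed:

    import pytest

    def lookup_value():
        # Stand-in for the application behavior in question; currently wrong.
        return None

    # strict=True turns an unexpected pass ("XPASS") into a failure, so the
    # weakened check cannot quietly outlive the defect it was hiding.
    @pytest.mark.xfail(reason="known defect, logged in the tracker", strict=True)
    def test_lookup_returns_a_value():
        assert lookup_value() is not None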

+5  A: 

If you stop testing it, how are you going to know when it's fixed, and more importantly, how are you going to know if it gets broken again? I'm against taking out the test, because you're likely to forget to add it back in again.

Paul Tomblin
+1  A: 

I would say "hell yeah!". The simple fact is: is it failing? Yes! Then it should be logged. You are pretty much compromising your testing by allowing a failing test to pass.

One thing that would concern me personally is that if I did this and then went under a bus, the "patch" might never get removed, meaning that even after a "bugfix" the bug could still remain undetected.

Leave it in, update your project notes, perhaps even move the severity down (if possible), but certainly don't break the thing that is checking for broken things ;)

Rob Cooper
+1  A: 

It should remain a failure if it's not doing what was expected.

Otherwise, it's too easy to ignore. Keep things simple -- it either works or it doesn't. Pass or fail :)

Kevin Fairchild
A: 

Having a failing test is kind of grating. There's a difference between broken code and unfinished code, and whether the test should be addressed immediately depends on which circumstance this failing test exposes.

If it's broken, you should fix it sooner rather than later. If it's unfinished, deal with it when you have time.

In either case, you can clearly live with the bad behavior for now, so as long as the issue is logged you might as well not have the suite nag you about it until you have time to fix it.

jodonnell
+1  A: 

While I agree with most of what Paul said, the other side of the argument would be that regression tests, strictly speaking, are supposed to test for changes in the program's behavior, not just any old bug. They're specifically supposed to tell you when you've broken something that used to work.

I think this boils down to what other sorts of tests are run on this app. If you have some sort of unit test system, maybe that would be a more appropriate place for this test, rather than in the regression test (at least until the bug is fixed). If the regression tests are your only tests, however, I would probably leave the test in place.
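If the suite happens to be pytest-based, one sketch of that split is a custom marker (the marker name here is made up), so the known-bug test can be kept out of the routine regression run but still executed on demand:

    # conftest.py -- register a hypothetical "known_bug" marker
    def pytest_configure(config):
        config.addinivalue_line(
            "markers", "known_bug: exercises a logged defect awaiting a fix"
        )

The regression run would then deselect those tests by passing -m "not known_bug" to pytest, while running with -m known_bug periodically checks whether any logged defects have quietly been fixed.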

Chris Upchurch
+3  A: 

We added a 'snooze' feature to our unit tests. It let a test be annotated with an attribute that basically said 'ignore failures for X weeks from this date'. Developers could annotate a test whose bug they knew would not get fixed for a while, and no future intervention was needed to manually re-enable it; the test would simply pop back into the suite at the designated time.
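A minimal sketch of that idea (assuming a pytest suite; the original was presumably an attribute in another framework, and every name and date here is illustrative):

    import datetime

    import pytest

    def snooze(until, reason):
        # Expiring expected-failure marker: before `until` (YYYY-MM-DD) the
        # test may fail quietly; on or after that date the marker goes inert
        # and the test rejoins the suite as an ordinary pass/fail check.
        expired = datetime.date.today() >= datetime.date.fromisoformat(until)
        return pytest.mark.xfail(condition=not expired, reason=reason)

    def buggy_total():
        # Stand-in for the defective code path; pretend 42 is correct.
        return 41

    @snooze(until="2025-06-30", reason="known defect, see the tracker")
    def test_total_is_correct():
        assert buggy_total() == 42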

Rob Walker
A: 

@Paul Tomblin,

Just to be clear--I've never considered removing the test; I was simply considering modifying the pass/fail condition to allow for the failure without it being thrown up in my face every time I run the test.

I'm a little concerned about repeated failures from known causes eventually getting treated like warnings in C++. I know developers who see warnings in their C++ code and simply ignore them because they think they're just useless noise. I'm afraid leaving a known failure in the regression suite might cause people to start ignoring other, possibly more important, failures.

BTW, lest I be misunderstood, I consider warnings in C++ to be an important aid in crafting strong code, but judging from other C++ developers I've met, I think I'm in the minority.

Onorio Catenacci
A failing test may be a pain in the arse, but it should be. There is a bug, and the test must show you that. If for some reason (and I mean a good business reason) it won't be fixed, change the regression test to pass, but create a separate test outside the regression suite to keep the issue documented -- and make that change only as an informed management decision. Don't modify regression tests to pass when there is good reason to believe there is a bug in the application. It is not your call whether a given issue is a bug or bad design that stays; don't take that responsibility if it isn't yours.
yoosiba