Our team is investigating various options for static analysis in our project, and we have mixed opinions about whether our Continuous Integration build should fail because of warnings from static analysis.

The argument against failing the build is that there are often exceptions to the rules, and attempting to work around them just to make the build succeed reduces productivity. A better approach would be to generate reports with the build, and regularly dedicate developer time to addressing the reported issues.

The counter-argument is that technical debt builds up easily if bugs are not addressed immediately. Also, if the build fails as soon as a potential bug is introduced, the time required to fix it is reduced.

What are your thoughts?

+1  A: 

Personally I'd rather see the build fail. While some warnings are false positives, those can be suppressed with a SuppressMessageAttribute that carries a Justification. That way you can be sure every warning has been evaluated by a developer and nothing simply slips through.
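
For example, a minimal C# sketch (the class, method, rule ID, and justification below are illustrative, not from the question):

    using System;
    using System.Diagnostics.CodeAnalysis;

    public static class StartupGuard
    {
        // Hypothetical example: the rule and justification are illustrative.
        [SuppressMessage("Microsoft.Design", "CA1031:DoNotCatchGeneralExceptionTypes",
            Justification = "Top-level handler: failures here must be logged, never rethrown.")]
        public static void Run(Action body)
        {
            try
            {
                body();
            }
            catch (Exception ex)
            {
                Console.Error.WriteLine(ex);
            }
        }
    }

The Justification string is the important part: the suppression itself documents why the rule does not apply.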

Steven
+1  A: 

It's probably a good idea to fail the build, but this doesn't have to be an absolutely black-and-white decision.

Hudson lets you fail a build if a certain threshold of new static analysis faults is exceeded. So you can say "mark the build as unstable if 1 new fault is introduced; mark the build as failed if 5 new faults are introduced".

This is something that's built into the various analysis plugins available for Hudson.
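
As a rough sketch of that policy (the class, method, and default thresholds below are hypothetical, not the plugin's actual API):

    // Hypothetical sketch of the tiered policy described above; Hudson's
    // analysis plugins implement it through their own configuration.
    public enum BuildStatus { Success, Unstable, Failed }

    public static class FaultGate
    {
        public static BuildStatus Evaluate(int newFaults, int unstableAt = 1, int failAt = 5)
        {
            if (newFaults >= failAt) return BuildStatus.Failed;       // e.g. 5 or more new faults
            if (newFaults >= unstableAt) return BuildStatus.Unstable; // e.g. 1-4 new faults
            return BuildStatus.Success;                               // no new faults
        }
    }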

Christopher
A: 

Tough call, without a good global answer. I'd like to agree with the two previous answers and say yes, but my Second Law of Static Analysis says that defects congregate in the parts of the organization where the software engineering process is most badly broken. A corollary is that engineers who are forced to change their code in a hurry to make the warnings go away are the ones most likely to introduce new problems when they do so; I've seen depressingly many such bugs. I think it's better software engineering to do the false-positive marking outside the code, as in, e.g., Coverity and Klocwork, and to base your enforcement on that.

It goes without saying that your main points about tracking such new defects as loudly as possible, and dedicating time promptly to avoiding technical debt, are excellent ideas.

Flash Sheridan
+1  A: 

I typically make the build fail on static analysis errors: not only the CI build, but also the one that runs on a developer's machine before committing (I use tools that can be integrated into the IDE).

If you don't do this, my experience is that errors don't get fixed and never will: treating them as non-blocking means treating them as cosmetic (or you wouldn't allow the commit, right?), and there will always be something more important to do than fixing them. If there is a justified exception, most tools allow you to exclude pieces of code, with things like a custom comment or an exclusion filter.
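
For example, with the .NET analyzers a single justified spot can be excluded in place (the rule, class, and method below are illustrative):

    using System.IO;

    public static class Logs
    {
        // Hypothetical example: suppose CA2000 flags this factory. Ownership of
        // the stream is deliberately transferred to the caller, so the rule is
        // disabled for just these lines, with the reason kept on record.
    #pragma warning disable CA2000 // Dispose objects before losing scope
        public static Stream Open(string path) => new FileStream(path, FileMode.Open);
    #pragma warning restore CA2000
    }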

If you want to use static analysis, do it right: fix the problem when it occurs and don't let an error propagate further into the system. A process that lets this happen is wrong:

Let's make toast American style: you burn, I'll scrape. -- W. Edwards Deming

Pascal Thivent
A: 

In addition to failing on errors, you need a process to investigate the warnings, and to decide whether some of them should become errors.

John Saunders