Is there a method for tracking or measuring the causes of bugs that won't result in unintended consequences from development team members? We recently added the ability to assign the cause of a bug in our tracking system. Examples of causes include: bad code, missed code, incomplete requirements, missing requirements, incomplete testing, etc. I was not a proponent of this, as I could see it leading to unintended behaviors from the dev team. To date this field has been hidden from team members and not actively used.

Now we are in the middle of a project with a larger-than-normal number of bugs, and this type of information would be good to have in order to better understand where we went wrong and where we can make improvements in the future (or adjustments now). To get good data on the causes of the bugs, we would need to open this field up for input by dev and QA team members, and I'm worried that will drive bad behaviors. For example, people may not want to fix a defect they didn't create because they'll feel it reflects poorly on their performance, or people might waste time arguing over the classification of a defect for similar reasons.

Has anyone found a mechanism to do this type of tracking without driving bad behaviors? Is it possible to expect useful data from team members if we explain the reasoning behind collecting it (not to drive individual performance metrics, but project success metrics)? Is there another, better way to do this type of thing (a more ad-hoc post-mortem or an open discussion of the issues, perhaps)?

A: 

Many version control systems have features like `svn blame`. This is not a direct metric for tracking a bug's cause, but it can tell you who checked in the changes to a release that contains a major bug.
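As a minimal, self-contained sketch of that idea (using Git rather than Subversion, and a hypothetical helper name), `git blame --line-porcelain` emits an `author` record for every line of a file, which can be tallied per author:

```python
import subprocess

def last_authors(path):
    """Map each author to the number of lines of `path` they last touched.

    Uses `git blame --line-porcelain`, which prints an `author <name>`
    header for every line. Note this only says who last edited a line,
    not who caused a bug; treat it as a starting point for discussion,
    not for assigning fault.
    """
    out = subprocess.run(
        ["git", "blame", "--line-porcelain", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    counts = {}
    for line in out.splitlines():
        if line.startswith("author "):
            name = line[len("author "):]
            counts[name] = counts.get(name, 0) + 1
    return counts
```

The function must be run from inside a Git working copy; the same counts-by-author summary could be produced from `svn blame` output with a different parser.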

There are also programs like Bugzilla (http://www.bugzilla.org/) that help track defects over time.

As for really digging into why bugs exist: yes, it's definitely worth looking into, though I can't give you a standard metric for collecting that information. There are a number of reasons why a system might be very buggy:

  • Poorly written specs
  • Rushed timelines
  • Low-skill programming
  • Bad morale
  • Lack of beta or QA testing
  • Lack of preparing software so that it is even feasible to beta or QA test
  • Poor ratio of time spent cleaning up bugs vs getting new functionality out
  • Poor ratio of time spent making bug-free enhancements vs getting functionality out
  • An exceedingly complex system that is easy to break
  • A changing environment that is outside the code base, such as the machine administration
  • Blame for mistakes affecting programmer compensation or promotion

That's just to name a few. If too many bugs is a big problem, then management, lead programmers, and any other stakeholders in the whole process need to sit down and discuss the issue.

eruciform
@stimy: were any of these answers useful for you?
eruciform
I guess my worry is more around what happens when you ask developers to provide this info. Are they reluctant? Do they end up fighting over the blame? Do they end up not wanting to take on extra defects, because the blame is a negative category? That's the type of thing I'm worried about.
Stimy
@stimy: you're exactly correct. this has to be set up in a nonthreatening way, where bug reports are used to track things to make them easier to fix and thus easier on the programmers; otherwise, it's the same as pointing fingers. if things _are_ tracked with some kind of blame characteristic, then all stakeholders must be equally liable, i.e. screaming salespeople and poor project managers must be treated just like preventable bugs and serious lapses in releases or testing.
eruciform
A: 

High bug rates can be a symptom of a schedule that is too rushed or inflexible. Switching to a zero-defect approach may help: fix all bugs before working on new code.

Assigning reasons is a good technique for seeing whether you have a problem area. Typical distributions I have seen are an even split between:

  • Specification errors (missing, incorrect, etc.)
  • Application bugs (incorrect code, missing code, bad data, etc.)
  • Incorrect tests / no error (generally incorrect expectations, or specifications not yet implemented)
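As a sketch of how such cause data can be summarized at the project level rather than per person (the cause labels and counts below are hypothetical, assuming the tracker can export one recorded cause per closed defect):

```python
from collections import Counter

# Hypothetical export: one recorded cause per closed defect.
causes = [
    "incorrect code", "missing requirement", "incorrect code",
    "incorrect test", "incomplete spec", "incorrect code",
    "missing requirement", "incorrect test",
]

# Tally by cause only -- deliberately no per-developer breakdown,
# so the report points at process gaps rather than individuals.
tally = Counter(causes)
for cause, n in tally.most_common():
    print(f"{cause}: {n} ({n / len(causes):.0%})")
```

Reporting only the aggregate split is one way to get the classification data the question asks about without turning the field into a per-person scoreboard.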

Reviewing and verifying the defect causes can be useful.

BillThor
