As part of an internal research project, we are trying to collect some metrics from a Bugzilla database. We have already found a tool to help us collect metrics from it (BugzillaMetrics), but now we are asking ourselves which metrics we should collect.

Now, that is why I would like to ask you:

**What kind of metrics about bugs do you collect?**

In our office the teams are small (2 to 5 developers). We have thought of metrics like bugs per developer, bugs per development sprint, and bugs per category (GUI, business logic, database), but we would like to hear some other ideas.
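
For concreteness, this is the kind of query we had in mind for bugs per developer. It is only a sketch: it assumes the stock Bugzilla MySQL schema (the `bugs` and `profiles` tables) and the connection credentials are placeholders. A per-category count would be the same query joined against `components` instead.

```python
# Sketch: open bugs per assignee, straight from the Bugzilla database.
# Assumes the stock Bugzilla MySQL schema; connection details are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="bugs", password="secret", database="bugs"
)
cur = conn.cursor()

# bug_status values depend on your workflow; these are Bugzilla defaults.
cur.execute("""
    SELECT p.login_name, COUNT(*) AS open_bugs
    FROM bugs b
    JOIN profiles p ON p.userid = b.assigned_to
    WHERE b.bug_status NOT IN ('RESOLVED', 'VERIFIED', 'CLOSED')
    GROUP BY p.login_name
    ORDER BY open_bugs DESC
""")
for login, count in cur.fetchall():
    print(f"{login}: {count}")
```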

Thanks in advance =)

+2  A: 

Bugs per category, definitely. I would also track time estimates versus actual time spent. The point is to give developers a tool to learn how to make accurate estimates. Estimating time is a notoriously fuzzy process, and your best source is experience. With this metric you can gain confidence in the estimates given by everybody.
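
A rough sketch of how you might pull estimate versus actual per bug, assuming Bugzilla's time-tracking feature is enabled (the estimate lives in `bugs.estimated_time` and work is logged per comment in `longdescs.work_time`); connection details are placeholders, so verify the column names against your Bugzilla version.

```python
# Sketch: estimated vs. actual hours for resolved bugs, assuming Bugzilla
# time tracking is enabled. Schema details are assumptions; verify locally.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="bugs",
                               password="secret", database="bugs")
cur = conn.cursor()
cur.execute("""
    SELECT b.bug_id, b.estimated_time,
           COALESCE(SUM(l.work_time), 0) AS actual_time
    FROM bugs b
    LEFT JOIN longdescs l ON l.bug_id = b.bug_id
    WHERE b.bug_status IN ('RESOLVED', 'VERIFIED', 'CLOSED')
      AND b.estimated_time > 0
    GROUP BY b.bug_id, b.estimated_time
""")
for bug_id, estimate, actual in cur.fetchall():
    ratio = actual / estimate  # > 1 means the estimate was too optimistic
    print(f"bug {bug_id}: estimated {estimate}h, spent {actual}h (x{ratio:.2f})")
```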

Mind you, you still won't be able to say that bug X should take Y time just because it is similar to bug Z. But you will be able to let Developer Baker look at it, and when he says "It will take 2 days to fix", you have something to judge how accurate he is.

RS Conley
+4  A: 

One relevant metric is the number of defects discovered per time unit (e.g. week, testing iteration, etc.). This can be a good indicator of when it is acceptable to stop testing and fixing. Of course, this metric can also take the priority of the bugs into account (10 trivial bugs reported per week is less worrying than 1-2 major defects per week).
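
A minimal sketch of the per-week count, split by severity, assuming the stock Bugzilla schema (`creation_ts` and `bug_severity` on the `bugs` table); connection details are placeholders.

```python
# Sketch: new defect reports per week, grouped by severity.
# Assumes the stock Bugzilla MySQL schema; adjust names to your setup.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="bugs",
                               password="secret", database="bugs")
cur = conn.cursor()
cur.execute("""
    SELECT YEARWEEK(b.creation_ts) AS week, b.bug_severity,
           COUNT(*) AS reported
    FROM bugs b
    GROUP BY week, b.bug_severity
    ORDER BY week
""")
for week, severity, reported in cur.fetchall():
    print(f"week {week}: {reported} new {severity} bugs")
```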

Another metric you might find useful is the mean time to fix a defect (the time between reporting and fixing/closing the bug).
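
And a sketch for the mean time to fix, assuming status changes are recorded in `bugs_activity` joined to `fielddefs` (column names vary between Bugzilla versions, e.g. `fielddefs.id` vs. `fielddefs.fieldid`, so verify against yours).

```python
# Sketch: average hours from bug creation to first RESOLVED transition.
# The bugs_activity/fielddefs join is an assumption about the stock schema.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="bugs",
                               password="secret", database="bugs")
cur = conn.cursor()
cur.execute("""
    SELECT AVG(TIMESTAMPDIFF(HOUR, b.creation_ts, r.resolved_at))
    FROM bugs b
    JOIN (
        SELECT a.bug_id, MIN(a.bug_when) AS resolved_at
        FROM bugs_activity a
        JOIN fielddefs f ON f.id = a.fieldid
        WHERE f.name = 'bug_status' AND a.added = 'RESOLVED'
        GROUP BY a.bug_id
    ) r ON r.bug_id = b.bug_id
""")
(mean_hours,) = cur.fetchone()
if mean_hours is not None:  # NULL if no bug was ever resolved
    print(f"mean time to fix: {mean_hours:.1f} hours")
```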

Cătălin Pitiș
A: 

I suggest the following list of metrics:

  • Number of currently open defects in the whole product.
  • Metrics for the iteration burn-down chart: number of open bugs/tasks and number of resolved bugs/tasks planned for a given iteration.
  • Defect detection percentage for each product version. This metric shows the ratio of defects detected during development and QA to defects found after QA, when the version was already released (see the worked example below).
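
A worked example of the defect detection percentage; the two counts are placeholders that would come from queries like the ones above, filtered by version and by whether the bug was reported before or after the release date.

```python
# Sketch: defect detection percentage (DDP) for one release.
# The counts below are placeholder numbers, not real data.
found_before_release = 120  # defects caught by dev/QA before shipping
found_after_release = 15    # defects reported against the shipped version

ddp = found_before_release / (found_before_release + found_after_release) * 100
print(f"DDP: {ddp:.1f}%")  # 88.9% here: QA caught most defects pre-release
```
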
Mark Kofman