I'm trying to create some internal metrics to demonstrate (determine?) how well TDD improves defect rates in code.

Is there a better way than defects/KLOC? What about a language's 'functional density'?

Any comments or suggestions would be helpful.

Thanks - Jonathan

+7  A: 

You may also consider mapping defect discovery rate and defect resolution rates... how long does it take to find bugs, and once they're found, how long do they take to fix? To my knowledge, TDD is supposed to improve on fix times because it makes defects known earlier... right?
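
For illustration, here is a minimal sketch (Python; the per-defect records with introduced/found/fixed timestamps are hypothetical, and knowing the introduction time exactly is optimistic) of how those two times could be computed:

    # Rough sketch: mean time-to-detect and mean time-to-fix from defect records.
    # The 'introduced' timestamp is a hypothetical field; real trackers rarely have it.
    from datetime import datetime

    defects = [
        {"introduced": datetime(2009, 9, 1), "found": datetime(2009, 9, 3), "fixed": datetime(2009, 9, 4)},
        {"introduced": datetime(2009, 9, 2), "found": datetime(2009, 9, 10), "fixed": datetime(2009, 9, 12)},
    ]

    def mean_days(deltas):
        # average a list of timedeltas, expressed in days
        return sum(d.total_seconds() for d in deltas) / len(deltas) / 86400

    time_to_detect = mean_days([d["found"] - d["introduced"] for d in defects])
    time_to_fix = mean_days([d["fixed"] - d["found"] for d in defects])

    print(f"mean time to detect: {time_to_detect:.1f} days")
    print(f"mean time to fix:    {time_to_fix:.1f} days")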

Matt G.
Up-voted for mentioning making *my* job easier.
Jeff Ober
I'd go so far as to say that fewer defects get out of the development team and into QA, but that's what I'm hoping to demonstrate (and quantify). Thanks - Jonathan
jdharley
Let us know the results
Burt
I agree with this. Most defect metrics are too easy to fudge - fewer defects/KLOC by writing unnecessary code, fewer defects per time period by just writing less code. I also prefer to focus on rapid correction over defect prevention (to a point...), so the ability to quickly fix an issue, when found, is often more important than preventing the issue in the first place (IMHO).
kyoryu
+3  A: 

Any measure is an arbitrary comparison of defects to code size; so long as the comparison is similar, it should work. E.g., defects/kloc in C to defects/kloc in C. If you changed languages, it would affect the metric in any case, since the same program in another language might be less defect-prone.
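
As a rough illustration (all numbers invented), a like-for-like defects/KLOC comparison within a single language might be computed like this:

    # Rough sketch: defects per KLOC, compared within the same language.
    # Defect counts and line counts below are invented for illustration.
    def defects_per_kloc(defect_count, lines_of_code):
        return defect_count / (lines_of_code / 1000.0)

    before_tdd = defects_per_kloc(defect_count=48, lines_of_code=32000)
    after_tdd = defects_per_kloc(defect_count=19, lines_of_code=27500)

    print(f"before TDD: {before_tdd:.2f} defects/KLOC")
    print(f"after TDD:  {after_tdd:.2f} defects/KLOC")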

Jeff Ober
+3  A: 

I suggest using the ratio between two times:

  1. the time spent fixing bugs
  2. the time spent writing other code

This seems valid across languages...


It also works if you only have a rough estimate of some big code base. You can still compare it to the new code you are writing, to impress your management ;-)
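
A minimal sketch of that ratio, assuming you can tag time entries as either bug fixing or other development work (the entries below are invented):

    # Rough sketch: time spent fixing bugs vs. time spent writing other code.
    # The timesheet entries are invented; any tagging scheme would do.
    entries = [
        {"kind": "feature", "hours": 30.0},
        {"kind": "bugfix", "hours": 6.5},
        {"kind": "feature", "hours": 22.0},
        {"kind": "bugfix", "hours": 3.0},
    ]

    bugfix_hours = sum(e["hours"] for e in entries if e["kind"] == "bugfix")
    other_hours = sum(e["hours"] for e in entries if e["kind"] != "bugfix")

    print(f"bug-fix / other-work ratio: {bugfix_hours / other_hours:.2f}")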

KLE
+3  A: 

Measuring defects isn't an easy thing. One would like to account for the complexity of the code, but that is incredibly messy and unpleasant. When measuring code quality I recommend:

  1. Measure the current state (what is your defect rate now)
  2. Make a change (peer reviews, training, code guidelines, etc)
  3. Measure the new defect rate (Have things improved?)
  4. Goto 2
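
As a sketch of what tracking that loop might look like (the per-release numbers are invented):

    # Rough sketch of the measure -> change -> re-measure loop, with invented numbers.
    releases = [
        {"name": "3.0 (baseline)", "defects": 52, "kloc": 18.0},
        {"name": "3.1 (peer reviews)", "defects": 41, "kloc": 17.2},
        {"name": "3.2 (TDD adopted)", "defects": 24, "kloc": 16.8},
    ]

    baseline_rate = releases[0]["defects"] / releases[0]["kloc"]
    for r in releases:
        rate = r["defects"] / r["kloc"]
        change = 100.0 * (rate - baseline_rate) / baseline_rate
        print(f"{r['name']}: {rate:.1f} defects/KLOC ({change:+.0f}% vs. baseline)")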

If you are going to compare coders make sure you compare coders doing similar work in the same language. Don't compare the coder who works in the deep internals of your most complex calculation engine to the coder who writes the code that stores stuff in the database.

I try to make sure that coders know that the process is being measured, not the coders. This helps to improve the quality of the metrics.

Jim Blizard
I find comparing coders to be largely counter to the teamwork principle, so I wouldn't do that anyway. Thanks for the comment.
jdharley
You might be shocked at how many "managers" think it's a good idea. If you measure coders individually, they will find a way to game the system.
Jim Blizard
+1  A: 

I'm skeptical of all LOC-related measurements, not just because of the differing expressiveness of languages, but because individual programmers vary enough in the expressiveness of their code to make this metric "fuzzy" at best.

The things I would measure in the interests of project management are:

  • Number of open defects on the project. There's no single scalar that can tell you where the project is and how close it is to a releasable state, but this is still a handy number to have on hand and watch over time.
  • Defect detection rate. This is not the rate of introduction of new defects into the system, but it's probably the closest proxy you'll find.
  • Defect resolution rate. If this is less than the detection rate, you're falling behind - if it's greater, you're getting ahead.

All of these numbers are more useful if you combine them with severity information. A product with 20 minor bugs may well be closer to release than one with 2 crashing bugs. If you're clearing the minor bugs but not the severe ones, you have to get the developers to refocus their attention.

I would track these numbers per project and per developer. The reason for doing them per project should be clear. The per-developer numbers are certainly not the whole picture of an individual contributor's skill or productivity, but can point you to people who might need training or remediation.

You may also wish to tag all the tickets in your defect tracking system by project module as well (especially for larger projects), so that you can tell when critical modules are in a fragile state.
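
Here's a minimal sketch of those three numbers plus a simple severity weighting, computed over one week of (invented) tickets:

    # Rough sketch: open defects, detection rate, and resolution rate for one week,
    # with an arbitrary severity weighting. Ticket data is invented.
    tickets = [
        {"severity": "critical", "opened": "2009-09-14", "closed": None},
        {"severity": "minor", "opened": "2009-09-15", "closed": "2009-09-16"},
        {"severity": "major", "opened": "2009-09-16", "closed": None},
        {"severity": "minor", "opened": "2009-09-17", "closed": "2009-09-18"},
    ]
    weights = {"critical": 10, "major": 3, "minor": 1}  # arbitrary severity weights

    open_tickets = [t for t in tickets if t["closed"] is None]
    detected = len(tickets)                              # tickets opened this week
    resolved = len([t for t in tickets if t["closed"]])  # tickets closed this week
    weighted_open = sum(weights[t["severity"]] for t in open_tickets)

    print(f"open defects: {len(open_tickets)} (severity-weighted: {weighted_open})")
    print(f"detected this week: {detected}, resolved this week: {resolved}")
    print("falling behind" if resolved < detected else "keeping up")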

bradheintz
A: 

Why don't you consider defects per use case, or defects per requirement? We have faced practical issues in arriving at the KLOC.

Vishwa