views: 183

answers: 2

Most programming houses / managers I know of can only define quality retrospectively, in terms of the number of bugs introduced or resolved.

However, most good programmers can innately sense quality once they start working with the code. (Right?)

Have any programming houses that you know of successfully translated this intuition into metrics that organizations can measure and track to ensure quality?

I ask because I very often hear rants from disgruntled managers who just cannot put their finger on what quality really is. But I hear that some organizations, like Honeywell, have lots of numbers for tracking programmer performance, all of which translate into figures that can be ticked off during appraisals. Hence my question to the community at large: share the metrics and stats you know of.

Suggestions about tools that can do a good job of measuring messy code would help too.

+1  A: 

At one customer site we used the CRAP metric which is defined as:

CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)

Where comp(m) is the cyclomatic complexity of a given method and cov(m) is the level of unit test coverage for that method. We used NDepend and NCover to provide the raw information to calculate the metric. It was useful for finding particular areas of the code base where attention should be paid. Also, rather than specifying a particular value as a target, we aimed for improvement over time.

Not perfect by any stretch, but still useful.
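
For reference, here is a minimal sketch of the calculation in Python. The function name and the hard-coded sample values are made up for illustration; in practice comp(m) and cov(m) come from tools like NDepend and NCover:

    def crap_score(complexity: int, coverage_pct: float) -> float:
        """CRAP score for a single method.

        complexity   -- cyclomatic complexity of the method, comp(m)
        coverage_pct -- unit test coverage of the method in percent, cov(m)
        """
        return complexity ** 2 * (1 - coverage_pct / 100) ** 3 + complexity

    # A complex, untested method scores far worse than a simple, well-tested one.
    print(crap_score(complexity=15, coverage_pct=0))    # 240.0
    print(crap_score(complexity=15, coverage_pct=80))   # 16.8
    print(crap_score(complexity=3, coverage_pct=100))   # 3.0

The cubic term means coverage pays off quickly: even partial coverage of a complex method pulls its score down sharply.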

Michael Barker
+1  A: 

Just a quick reminder:

Code quality is:

  • not defined by a single criterion: several groups of people are concerned with code quality (developers, project managers, and stakeholders), and they all need to see it represented differently.

  • not defined by one number coming from one formula, but rather by the trend of that number: a "bad" score in itself does not mean anything, especially in legacy code, but a bad score that keeps getting worse... that is worrisome ;) (see the sketch after this list)
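
A minimal sketch of the trend idea, assuming you already record one quality score per build (higher meaning worse, as with the CRAP metric above; the sample numbers are made up):

    def is_worsening(scores: list[float], window: int = 3) -> bool:
        """Flag a metric whose last `window` readings are strictly increasing."""
        recent = scores[-window:]
        return len(recent) == window and all(a < b for a, b in zip(recent, recent[1:]))

    legacy_but_stable = [120.0, 119.5, 121.0, 120.5]  # bad score, but not degrading
    getting_worse = [30.0, 34.0, 41.0, 55.0]          # smaller score, but trending up

    print(is_worsening(legacy_but_stable))  # False
    print(is_worsening(getting_worse))      # True

The point being that the second series deserves attention even though its absolute values look healthier than the first.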

VonC