views:

257

answers:

5

I'm working on a quite large project, a few years in the making, at a pretty large company, and I'm taking on the task of driving toward better overall code quality.

I was wondering what kind of metrics you would use to measure quality and complexity in this context. I'm not looking for absolute measures, but a series of items which could be improved over time. Given that this is a bit of a macro-operation across hundreds of projects (I've seen some questions asked about much smaller projects), I'm looking for something more automatable and holistic.

So far, I have a list that looks like this:

  • Code coverage percentage during full-functional tests
  • Recurrence of BVT failures
  • Dependency graph/score, based on some tool like nDepend
  • Number of build warnings
  • Number of FxCop/StyleCop warnings found/suppressed
  • Number of "catch" statements
  • Number of manual deployment steps
  • Number of projects
  • Percentage of code/projects that's "dead", as in, not referenced anywhere
  • Number of WTFs during code reviews
  • Total lines of code, maybe broken down by tier
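As a starting point, a few of these (total LOC per tier, number of "catch" statements) can be trended with nothing more than a script run on the build server. Here is a minimal sketch in Python, assuming a single source root and that the first directory level under it corresponds to a project/tier; the path and the heuristics are placeholders you would adapt to your actual layout:

    import os
    import re
    from collections import defaultdict

    # Hypothetical source root; adjust to your layout.
    SOURCE_ROOT = r"C:\src\big-solution"

    catch_re = re.compile(r"\bcatch\b")
    # Per-"project" tallies: non-blank, non-comment lines and catch statements.
    metrics = defaultdict(lambda: {"loc": 0, "catches": 0})

    for dirpath, _dirs, files in os.walk(SOURCE_ROOT):
        # Crude heuristic: treat the first directory level as the project/tier.
        project = os.path.relpath(dirpath, SOURCE_ROOT).split(os.sep)[0]
        for name in files:
            if not name.endswith(".cs"):
                continue
            with open(os.path.join(dirpath, name), encoding="utf-8", errors="ignore") as fh:
                for line in fh:
                    stripped = line.strip()
                    if stripped and not stripped.startswith("//"):
                        metrics[project]["loc"] += 1
                        if catch_re.search(stripped):
                            metrics[project]["catches"] += 1

    for project, m in sorted(metrics.items()):
        print(f"{project}: {m['loc']} LOC, {m['catches']} catch statements")

The point is less the exact numbers than having something cheap enough to run on every build so the trend is visible.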
+1  A: 

Maybe you'll find this analysis interesting, or insightful: A Tale of Four Kernels
Edit: schema, and the corresponding queries

Nick D
tl;dr. Just kidding. This looks very interesting.
askheaves
Let me mention that this is relevant for C code.
Diomidis Spinellis
+1  A: 

Cyclomatic complexity is a decent "quality" metric. I'm sure developers could find a way to "game" it if it were the only metric, though! :)
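For anyone unfamiliar with it: cyclomatic complexity is roughly 1 plus the number of decision points (if/for/while/case, boolean short-circuits, catch blocks) in a method; NDepend and Visual Studio's code metrics will compute it for .NET. A tiny sketch of the counting idea, shown in Python purely as illustration of what is being measured:

    import ast

    DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

    def cyclomatic_complexity(source: str) -> int:
        """Rough cyclomatic complexity: 1 + number of decision points in the source."""
        tree = ast.parse(source)
        return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

    example = '''
    def classify(x):
        if x < 0:
            return "negative"
        for _ in range(3):
            if x == 0 or x == 1:
                return "small"
        return "large"
    '''
    print(cyclomatic_complexity(example))  # 5: base 1 + if + for + if + or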

And then there's the C.R.A.P. metric...
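For reference, the C.R.A.P. (Change Risk Anti-Patterns) score combines cyclomatic complexity with test coverage, so a team can't game one without the other; the usual formulation is CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m). A trivial sketch:

    def crap_score(complexity: int, coverage_percent: float) -> float:
        """C.R.A.P.(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m)."""
        uncovered = 1.0 - coverage_percent / 100.0
        return complexity ** 2 * uncovered ** 3 + complexity

    print(crap_score(complexity=15, coverage_percent=0))    # 240.0 -- complex and untested
    print(crap_score(complexity=15, coverage_percent=100))  # 15.0  -- full coverage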

P.S. NDepend has about ten billion metrics, so that might be worth looking at. See also CodeMetrics for Reflector.

D'oh! I just noticed that you already mentioned NDepend.

Number of reported bugs would be interesting to track, too...

TrueWill
Snap. That CodeMetrics P.S. may be worth the question alone.
askheaves
We have a lot of people internally opening and tracking bugs. Tens of thousands of bugs across my application and others per release. Like I said... LARGE.
askheaves
A: 

Amount of software cloning; less is obviously better.
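Short of a real clone detector, a crude way to get a trend line is to hash fixed-size windows of normalized lines and count how many windows occur more than once. A rough sketch under those assumptions (the window size, file extension, and comment handling are arbitrary choices):

    import hashlib
    import os
    from collections import Counter

    WINDOW = 6  # minimum clone size, in normalized lines

    def normalized_lines(path):
        with open(path, encoding="utf-8", errors="ignore") as fh:
            for line in fh:
                line = line.strip()
                if line and not line.startswith("//"):
                    yield line

    def clone_ratio(root, extension=".cs"):
        """Fraction of sliding line-windows that occur more than once."""
        windows = Counter()
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                if name.endswith(extension):
                    lines = list(normalized_lines(os.path.join(dirpath, name)))
                    for i in range(len(lines) - WINDOW + 1):
                        chunk = "\n".join(lines[i:i + WINDOW]).encode()
                        windows[hashlib.sha1(chunk).hexdigest()] += 1
        total = sum(windows.values())
        duplicated = sum(c for c in windows.values() if c > 1)
        return duplicated / total if total else 0.0

    # print(clone_ratio(r"C:\src\big-solution"))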

Ira Baxter
+1  A: 

If you're taking on the task of driving toward better overall code quality, you might take a look at:

  • How many open issues do you currently have, and how long do they take to resolve?
  • What process do you have in place to gather requirements?
  • Does your staff follow best practices?
  • Do you have SOPs defined that describe your company's programming methodology?

When you have a number of developers involved in a large project, everyone has their own way of programming. Each style of programming may solve the problem, but some solutions will be less efficient than others.

How do you utilize your staff when attacking a new feature or fixing existing code? Having developers work in teams following programming SOPs forces everyone to become a better coder.

When your people code more efficiently by following the rules, your development time should get shorter.

You can gather all the metrics you want, but I say you first have to see how things are being done:

What are your development practices?

Without knowing how things are currently being done, you can get all the metrics you want, but you'll never see any improvement.

I like this answer because it speaks to the other side of my problem... what standards do I put in place for development? I think I was looking at metrics driving those standards, but we can come at this from both directions.
askheaves
+2  A: 

You should organize your work around the six major software quality characteristics: functionality, reliability, usability, efficiency, maintainability, and portability. I've put a diagram online that describes these characteristics. Then, for each characteristic, decide on the most important metrics you want and are able to track. For example, some metrics, like those of Chidamber and Kemerer, are suitable for object-oriented software; others, like cyclomatic complexity, are more general-purpose.
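To make the object-oriented/general-purpose distinction concrete: two of the Chidamber and Kemerer metrics, Depth of Inheritance Tree (DIT) and Number of Children (NOC), only apply to class hierarchies. A small illustration in Python (.NET metric tools such as NDepend report similar figures for assemblies):

    def dit(cls) -> int:
        """Depth of Inheritance Tree: longest path from cls up to object."""
        if cls is object:
            return 0
        return 1 + max(dit(base) for base in cls.__bases__)

    def noc(cls) -> int:
        """Number of Children: immediate subclasses only."""
        return len(cls.__subclasses__())

    class Shape: ...
    class Polygon(Shape): ...
    class Triangle(Polygon): ...
    class Circle(Shape): ...

    print(dit(Triangle))  # 3 (Triangle -> Polygon -> Shape -> object)
    print(noc(Shape))     # 2 (Polygon and Circle)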

Diomidis Spinellis