+9  Q: 

Code metrics

I'm just curious about what kind of code metrics people are using and opinions/experience on the most effective use of code metrics. All of our code, regardless of language, uses the following:

  • Cyclomatic Code Complexity
  • Lines of Code
  • Coupling (has different meanings for OO languages than in procedural and templating languages)

The complexity measure has been the most effective for us in identifying potential maintenance nightmares. To a lesser extent, LOC has also been helpful as a relative measure (e.g. if we see a class come in that has 20 more lines than the average class). Coupling has been less useful, usually most helpful when looking at how many things we might break with a change.
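
For anyone comparing notes, here is roughly how the complexity number is derived: each independent decision point (if, while, for, case, &&, ||) adds one to a base of one. This is a hypothetical Java method, not code from our codebase:

    // Hypothetical method: complexity = 1 (base) + 3 decision points = 4.
    public String classify(int age, boolean member) {
        if (age < 0) {                    // +1
            throw new IllegalArgumentException("negative age");
        }
        if (age >= 65 || member) {        // +1 for the if, +1 for ||
            return "discount";
        }
        return "full price";
    }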

I'm interested in knowing what others are using (if anything) for code metrics and opinions on the metrics listed above.

+2  A: 

My current company doesn't really keep any code metrics, though that's not my choice.

My previous employer kept metrics on code complexity and lines of code. They also had limits on the lengths of class files. Classes that got too large had to go through a code review so they could be broken up into smaller, more appropriate classes. It was a pain in the butt at times, but it kept what was an extremely large source base (100 to 200 developers were on the project) fairly maintainable.

Mr. Will
A: 
  • Lint
  • Cyclomatic complexity
  • Unfortunately, little else
John Pirie
+2  A: 

We currently do test coverage with EMMA as a Maven plugin. It's pretty slick: it will tell you exactly how much of your code is executed by your tests. We use JUnit for testing.
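
For illustration, here is a minimal sketch of the kind of gap a coverage tool like EMMA flags (the class and test names are made up):

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical class under test.
    class Divider {
        static int divide(int a, int b) {
            if (b == 0) {
                throw new ArithmeticException("division by zero");
            }
            return a / b;
        }
    }

    public class DividerTest {
        @Test
        public void dividesEvenly() {
            assertEquals(3, Divider.divide(6, 2));
        }
        // No test exercises the b == 0 branch, so the coverage
        // report would flag that line/block as uncovered.
    }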

Mike Pone
A: 

I regularly use David A. Wheeler's SLOCCount, which gives LOC, and have also tried CCCC, though it is a bit dated now.

BjoernD
A: 

At Bell, we calculated Function Points on every project (see IFPUG).

The good: they built up a pretty large database of project costs.

The bad: it stopped working as soon as something new was introduced, and since computer science evolves very fast, there was almost always something new...

Conclusion: metrics are pretty good in stable environments, but change a few parameters and they become somewhat useless.

Sylvain
A: 

I have been experimenting with the Metrics plugin for Eclipse from StateOfFlow and I am getting to like the idea of having my code quality analysed. Of course, not all the metrics are clear or useful to me, but of the wide range the plugin provides (currently 14, by my count), I tend to take these seriously:

Method metrics: Cyclomatic complexity | Number of statements | Number of locals in scope | Number of levels

Class metrics: Number of fields | Weighted methods per class

To reduce this list even further, I really believe in McCabe's Cyclomatic Complexity measure, and I find the number of statements a quite useful indication of too much work being done in one place.
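
To make Weighted Methods per Class concrete, here is a hypothetical sketch; I am assuming the common convention of weighting each method by its cyclomatic complexity:

    // Hypothetical class; WMC = sum of the methods' cyclomatic complexities.
    class Account {
        private int balance;

        void credit(int amount) {          // CC = 1
            balance += amount;
        }

        boolean canWithdraw(int amount) {  // CC = 2 (one if)
            if (amount > balance) {
                return false;
            }
            return true;
        }
    }
    // WMC(Account) = 1 + 2 = 3; a high WMC flags a class doing too much.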

Of the rest of the metrics provided by the plugin, I find the ones from the Lack of Cohesion in Methods group rather difficult to understand. Today, I ran a little experiment of my own: after a couple of hours' coding I turned on the Metrics support for the project, and six of the seven problems found were related to cohesion, one particularly surprising: Lack of Cohesion in Methods (Total Correlation) was 209%.
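
As far as I understand it, the intuition behind all the cohesion metrics is roughly this (a hypothetical class, not one of mine): methods that touch disjoint sets of fields suggest the class is really two classes glued together.

    // Hypothetical low-cohesion class: the two method/field clusters
    // never overlap, so any LCOM variant should score it poorly.
    class ReportMailer {
        private String reportBody;
        private String mailHost;

        void buildReport() {               // touches only reportBody
            reportBody = "...";
        }

        void connect() {                   // touches only mailHost
            mailHost = mailHost.trim();
        }
    }
    // Splitting it into Report and Mailer would drive LCOM toward zero.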

I find it hard to do anything about these: Chidamber and Kemerer | Henderson-Sellers | Total Correlation | Pairwise Field Irrelation. I am very tempted to raise the allowed maxima for these metrics, so they would stop appearing as Warnings.

I think having code metrics calculated on-the-fly provides a helpful guide to writing better code. I am glad you asked this question, as I would like to read more about how the others are using metrics to improve code quality.

By the way, I would welcome any recommendations of other (Eclipse) plugins you might have experience with. The one from StateOfFlow provides a nice way of exporting the metrics as HTML pages with graphs and tables, and can also export them to CSV files, which you can then feed into whatever other utilities you may be using. I am enjoying the plugin so far :)

Peter Perháč
Here's a nice summary: http://www.ibm.com/developerworks/java/library/j-ap01117/index.html#N10228 He's using metrics.sourceforge.org (not eclipse-metrics.sourceforge.org). The two plugins seem to be complementary, but they are not the same AFAIK.
+5  A: 

You get what you measure.

Therefore choose your metrics carefully. Measuring the wrong thing gives you the wrong thing. Not all goals can be measured directly so you'll have to settle for a proxy that hopefully correlates with the goal.


On the latest project I completed, I measured the following. None of these are code metrics in the strict sense; they are higher-level project metrics, but I think they are still relevant to this question.

  • daily build failure rate (with root cause analysis for failures), target <20%
  • testing run/pass rates (both automated and manual tests), targets varying per project phase; at end run rate target 100%, pass rate 95%
  • testing function and decision coverage for new code (non-UI, non-legacy, not ported), target 100% for API functions, 80% for other functions, 50% for decisions (see the sketch after this list)
  • open error counts by priority, target to see a flat or decreasing curve (team bug fixing capacity is sufficient), no open high-priority errors
  • inspection: component size, inspection effort, issues found by severity; target to get a sort of defects/effort/loc heuristic measure to make sure the components are inspected thoroughly enough
  • static analysis tools like lint and some domain-specific tools run, high priority issues fixed or understood
  • team velocity estimate vs. actual per sprint, target to decrease estimation error to less than 20%
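
Since the coverage target above leans on the distinction between function and decision coverage, here is a minimal hypothetical sketch: one test covers the function, but a second is needed to cover both outcomes of the decision.

    // Hypothetical illustration of function vs. decision coverage.
    class Clamp {
        static int clamp(int x, int max) {
            if (x > max) {      // the decision
                return max;
            }
            return x;
        }
    }
    // clamp(5, 10)  exercises the function and the false branch only;
    // clamp(15, 10) is also needed for 100% decision coverage.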

These metrics were derived from higher level goals we had as a project. In a nutshell, shipping a good enough product as early as possible without incurring too much technical debt.

Most of the "code metric" issues were checked informally as part of the inspection. We had quite a good feel for the system, so we knew where the most complex parts requiring the most attention were. As programmers, we were also able to detect complexity smells without resorting to formal measures.

laalto
A: 

Essential Cyclomatic Complexity is an interesting one as well, as it gives an indication of how 'unstructured' the code is.

Unstructured code uses, for example, breaks and gotos to exit control structures such as for loops.
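
For example (a hypothetical Java fragment; C with goto would make the point even more strongly), the early break below is the kind of extra exit essential complexity penalises, while the restructured loop condition keeps the code fully structured:

    // Unstructured: the break adds a second exit from the loop.
    static int firstNegative(int[] a) {
        int found = -1;
        for (int i = 0; i < a.length; i++) {
            if (a[i] < 0) {
                found = i;
                break;          // extra exit; raises essential complexity
            }
        }
        return found;
    }

    // Structured: single entry, single exit.
    static int firstNegativeStructured(int[] a) {
        int i = 0;
        while (i < a.length && a[i] >= 0) {
            i++;
        }
        return (i < a.length) ? i : -1;
    }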

However, the only product I know of that gives this metric is McCabe IQ.

Tom Carter
+1  A: 

Scott Hanselman recently did a very good podcast on exactly this. Try here: http://www.hanselminutes.com/default.aspx?showID=181

Troy Hunt
+1  A: 

I find the Type Rank and Method Rank code metrics to be extremely useful when you need to get a quick overview of the key types and methods in your code. Both are inspired by Google's famous PageRank algorithm.

If you are in a .NET environment, NDepend will calculate Type Rank and Method Rank for you (as well as 80 other code metrics).
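
I don't know what NDepend actually computes under the hood, but the PageRank idea itself is easy to sketch. Everything below is mine: the class name, the toy call graph, and the simplifications (fixed iteration count, no dangling-node redistribution):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch of the PageRank idea on a call graph:
    // a method's rank grows when many highly-ranked methods call it.
    public class MethodRank {
        public static void main(String[] args) {
            Map<String, List<String>> calls = Map.of(  // caller -> callees (made up)
                "main",  List.of("parse", "run"),
                "run",   List.of("parse", "save"),
                "parse", List.of("save"),
                "save",  List.of());

            final double d = 0.85;                     // damping factor
            final int n = calls.size();
            Map<String, Double> rank = new HashMap<>();
            for (String m : calls.keySet()) {
                rank.put(m, 1.0 / n);                  // uniform initial rank
            }

            for (int iter = 0; iter < 50; iter++) {
                Map<String, Double> next = new HashMap<>();
                for (String m : calls.keySet()) {
                    next.put(m, (1 - d) / n);
                }
                // Each caller distributes its rank evenly among its callees.
                for (Map.Entry<String, List<String>> e : calls.entrySet()) {
                    List<String> callees = e.getValue();
                    for (String callee : callees) {
                        next.merge(callee,
                                   d * rank.get(e.getKey()) / callees.size(),
                                   Double::sum);
                    }
                }
                rank = next;
            }
            // "save" ends up highest: it is called by the most-ranked methods.
            rank.forEach((m, r) -> System.out.printf("%s: %.3f%n", m, r));
        }
    }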

vaucouleur