tags:

views:

388

answers:

7

I would like to know whether anybody regularly uses metrics to validate their code/design. As an example, I am thinking of using:

  • number of lines per method (< 20)
  • number of variables per method (< 7)
  • number of parameters per method (< 8)
  • number of methods per class (< 20)
  • number of fields per class (< 20)
  • inheritance tree depth (< 6)
  • Lack of Cohesion in Methods

Most of these metrics are very simple.
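As a rough illustration of how mechanically checkable these are, here is a minimal Python sketch using the standard `ast` module; the sample class, the thresholds, and all names are made up for this example, not taken from any particular tool:

```python
import ast

# Toy source to measure; the thresholds below mirror the list above
# (assumed limits, purely for illustration).
SOURCE = '''
class Greeter:
    def greet(self, name, punctuation):
        message = "Hello, " + name + punctuation
        print(message)
        return message
'''

MAX_LINES_PER_METHOD = 20
MAX_PARAMS_PER_METHOD = 8

metrics = {}
for node in ast.walk(ast.parse(SOURCE)):
    if isinstance(node, ast.FunctionDef):
        metrics[node.name] = {
            "lines": node.end_lineno - node.lineno + 1,
            "params": len(node.args.args),  # note: includes `self`
        }

for name, m in metrics.items():
    ok = m["lines"] < MAX_LINES_PER_METHOD and m["params"] < MAX_PARAMS_PER_METHOD
    print(name, m, "OK" if ok else "VIOLATION")
```

Real tools (NDepend, and the checkers mentioned in the answers below) do considerably more, but the core counting is about this simple.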

What is your policy about this kind of measure? Do you use a tool to check them (e.g. NDepend)?

A: 

OO metrics are a bit of a pet project for me (they were the subject of my master's thesis). So yes, I use them, and I use a tool of my own.

For years the book "Object Oriented Software Metrics" by Mark Lorenz was the best resource for OO metrics. But recently I have seen more resources.

Unfortunately I have other deadlines, so there is no time to work on the tool right now. But eventually I will be adding new metrics (and new language constructs).

Update: We are now using the tool to detect possible problems in the source. Several metrics we have added (not all purely OO):

  • use of assert
  • use of magic constants
  • use of comments, in relation to the complexity of methods
  • statement nesting level
  • class dependency
  • number of public fields in a class
  • relative number of overridden methods
  • use of goto statements

There are still more. We keep the ones that give a good picture of the pain spots in the code, so we get direct feedback when these are corrected.
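A metric like statement nesting level is cheap to compute from a syntax tree. A minimal Python sketch (the helper name, the set of "block" node types, and the sample snippet are all assumptions for illustration; `async` constructs, among others, are ignored):

```python
import ast

def max_nesting(node, depth=0):
    """Return the deepest statement-nesting level found under `node`."""
    # Simplified notion of what opens a new nesting level.
    block_types = (ast.If, ast.For, ast.While, ast.With, ast.Try)
    deepest = depth
    for child in ast.iter_child_nodes(node):
        child_depth = depth + 1 if isinstance(child, block_types) else depth
        deepest = max(deepest, max_nesting(child, child_depth))
    return deepest

snippet = '''
for row in rows:
    if row.valid:
        for cell in row.cells:
            process(cell)
'''
level = max_nesting(ast.parse(snippet))
print(level)  # for -> if -> for gives a nesting level of 3
```

A checker would compare this against a threshold and flag the offending function, exactly like the simpler counts above.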

Gamecat
A: 

Hard numbers don't work for every solution. Some solutions are more complex than others. I would start with these as your guidelines and see where your project(s) end up.

But regarding these numbers specifically, they seem pretty high. In my particular coding style I usually find I have:

  • no more than 3 parameters per method
  • about 5-10 lines per method
  • no more than 3 levels of inheritance

That isn't to say I never go over these generalities, but I usually think more about the code when I do because most of the time I can break things down.

casademora
+3  A: 

Imposing numerical limits on those values (as you seem to imply with the numbers) is, in my opinion, not a very good idea. The number of lines in a method can be very large if there is a significant switch statement, and yet the method is still simple and proper. The number of fields in a class can appropriately be very large if the fields are simple. And five levels of inheritance can sometimes be way too many.

I think it is better to analyze the class cohesion (more is better) and coupling (less is better), but even then I am doubtful of the utility of such metrics. Experience is usually a better guide (though that is, admittedly, expensive).

Jeffrey L Whitledge
+1  A: 

Personally I think it's very difficult to adhere to these types of requirements (i.e. sometimes you just really need a method with more than 20 lines), but in the spirit of your question I'll mention some of the guidelines used in an essay called Object Calisthenics (part of the Thoughtworks Anthology if you're interested).

  • Levels of indentation per method (<2)
  • Number of 'dots' per line (<2)
  • Number of lines per class (<50)
  • Number of classes per package (<10)
  • Number of instance variables per class (<3)

He also advocates not using the 'else' keyword nor any getters or setters, but I think that's a bit overboard.
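For illustration, some of these rules can be approximated with a naive line scan. This Python sketch (the sample code, the rule constant, and its name are all hypothetical) checks the "fewer than 2 dots per line" rule, ignoring the subtlety that dots inside string literals or float literals would need to be excluded in a real checker:

```python
RULE_MAX_DOTS = 1  # the essay's "fewer than 2 dots per line" rule

code = '''order.customer.address.city = "Oslo"
total = order.total()
'''

# Collect the 1-based line numbers that break the rule.
violations = []
for lineno, line in enumerate(code.splitlines(), start=1):
    if line.count('.') > RULE_MAX_DOTS:
        violations.append(lineno)

print(violations)  # line 1 chains three dots, line 2 is fine
```

The dots rule is really a Law of Demeter check in disguise: a long chain of dots means the code is reaching through several objects' internals.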

Ben Hoffstein
A: 

As others have said, keeping to a strict standard is going to be tough. I think one of the most valuable uses of these metrics is to watch how they change as the application evolves. This helps to give you an idea how good a job you're doing on getting the necessary refactoring done as functionality is added, and helps prevent making a big mess :)

AlexCuse
+2  A: 

Management by metrics does not work for people or for code; no metric or absolute value will always work. Please don't let a fascination with metrics distract you from truly evaluating the quality of the code. Metrics may appear to tell you important things about the code, but the best they can do is hint at areas to investigate.

That is not to say that metrics are not useful. Metrics are most useful when they are changing, to look for areas that may be changing in unexpected ways. For example, if you suddenly go from 3 levels of inheritance to 15, or 4 parms per method to 12, dig in and figure out why.

Example: a stored procedure that updates a database table may have as many parameters as the table has columns; an object interface to this procedure may have the same number, or it may have just one if there is an object representing the data entity. But the constructor for that data entity may take all of those parameters. So what would the metrics tell you here? Not much! And if you have enough situations like this in the code base, the target averages will be blown out of the water.

So don't rely on metrics as absolute indicators of anything; there is no substitute for reading/reviewing the code.

Steven A. Lowe
+4  A: 

A metric I didn't see in your list is McCabe's Cyclomatic Complexity. It measures the complexity of a given function and correlates with bugginess: a high complexity score for a function indicates 1) it is likely to be buggy, and 2) it is likely to be hard to fix properly (i.e. fixes will introduce their own bugs).
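A common way to approximate McCabe's number is to count decision points and add one. A rough Python sketch using the `ast` module (the function name and sample source are assumptions, and the counting is simplified, e.g. a boolean operator chain is counted once regardless of how many operands it has):

```python
import ast

def cyclomatic_complexity(func):
    """Approximate McCabe complexity: 1 + number of decision points."""
    decision_types = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                      ast.BoolOp, ast.IfExp)
    count = 1
    for node in ast.walk(func):
        if isinstance(node, decision_types):
            count += 1
    return count

src = '''
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "prime-ish"
'''
func = ast.parse(src).body[0]
cc = cyclomatic_complexity(func)
print(cc)  # if + elif + for + inner if = 4 decisions, so complexity 5
```

The `elif` contributes its own `If` node, which is why it counts as a separate decision point.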

Ultimately, metrics are best used at a gross level -- like control charts. You look for points above and below the control limits to identify likely special cases, then you look at the details. For example, a function with a high cyclomatic complexity may cause you to look at it, only to discover that it is appropriate because it is a dispatcher method with a number of cases.

torial