Recently our company has started measuring the cyclomatic complexity (CC) of the functions in our code on a weekly basis, and reporting which functions have improved or worsened. So we have started paying a lot more attention to the CC of functions.

I've read that CC can be informally calculated as 1 + the number of decision points in a function (e.g. if statements, for loops, select statements), or alternatively as the number of linearly independent paths through a function...
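
For example, by that informal rule, the following made-up method (the name and logic are purely illustrative) would have a CC of 4:

int Describe(int x)
{
    if (x < 0)                   // decision point 1
        return -1;

    for (int i = 0; i < x; i++)  // decision point 2
        Console.WriteLine(i);

    return x % 2 == 0 ? 0 : 1;   // decision point 3 (the conditional operator)
}
// CC = 1 + 3 decision points = 4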

I understand that the easiest way of reducing CC is to use the Extract Method refactoring repeatedly...
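
For example (an illustrative sketch; Order, IsValid, Total and ApplyDiscount are made-up names), extracting the inner if into its own method lowers the CC of each individual method:

void Process(Order order)
{
    if (order.IsValid)          // decision point
    {
        if (order.Total > 100)  // decision point
            ApplyDiscount(order);
    }
}
// CC = 3

becomes

void Process(Order order)
{
    if (order.IsValid)          // decision point
        ApplyDiscounts(order);
}
// CC = 2

void ApplyDiscounts(Order order)
{
    if (order.Total > 100)      // decision point
        ApplyDiscount(order);
}
// CC = 2

Note that the total number of decision points is unchanged; only the per-method CC drops.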

There are some things I am unsure about, e.g. what is the CC of the following code fragments?

1)

for (int i = 0; i < 3; i++)
    Console.WriteLine("Hello");

And

Console.WriteLine("Hello");
Console.WriteLine("Hello");
Console.WriteLine("Hello");

They both do the same thing, but does the first version have a higher CC because of the for statement?

2)

if (condition1)
    if (condition2)
        if (condition3)
            Console.WriteLine("wibble");

And

if (condition1 && condition2 && condition3)
    Console.WriteLine("wibble");

Assuming the language uses short-circuit evaluation, as C# does, these two code fragments have the same effect... but is the CC of the first fragment higher because it has 3 decision points/if statements?

3)

if (condition1)
{
    Console.WriteLine("one");

    if (condition2)
        Console.WriteLine("one and two");
}

And

if (condition3)
    Console.WriteLine("fizz");

if (condition4)
    Console.WriteLine("buzz");

These two code fragments do different things, but do they have the same CC? Or does the nested if statement give the first fragment a higher CC? Nested if statements are mentally harder to understand, but is that reflected in the CC?

+3  A: 

CC is not a panacea for measuring quality. Clearly a repeated statement is not "better" than a loop, even if the loop has a higher CC. The reason the loop has a higher CC is that its body might execute and might not, which leads to two different "cases" that should both be tested. In your case the loop will always execute three times because you use a constant, but CC is not clever enough to detect this.
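
To make those two cases concrete (a made-up sketch; the parameterised loop is mine, not from the question):

void Greet(int times)
{
    // Two paths through this method: the body executes at least once, or not at all.
    for (int i = 0; i < times; i++)
        Console.WriteLine("Hello");
}

// Each path deserves its own test:
// Greet(0) -> loop body never executes
// Greet(3) -> loop body executes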

Same with the nested ifs in example 2 - this structure allows you to add a statement that executes whenever condition1 and condition2 are true, regardless of condition3. That special case is not possible with the && version. So the if-chain has a bigger potential for special cases even if you don't utilize this in your code.
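
For example (illustrative only):

if (condition1)
    if (condition2)
    {
        Console.WriteLine("one and two");  // runs whether or not condition3 holds
        if (condition3)
            Console.WriteLine("wibble");
    }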

JacquesB
+5  A: 

After browsing through the Wikipedia entry and Thomas J. McCabe's original paper, it seems that the items you mention above are known problems with the metric.

However, most metrics have pros and cons. I suppose in a large enough program the CC value could point to possibly complex parts of your code. But a higher CC does not necessarily mean the code is complex.

moogs
Whose original paper are you referring to? Could you provide a link?
RickL
added link and the name (Thomas J McCabe)
moogs
Thanks, I skimmed through the paper; it's quite hard to read, though, as it's written in a precise technical style.
RickL
...I couldn't find where the Wikipedia entry or the original paper mentions known problems with the metric; could you please point me in the right direction?
RickL
A: 

[Off topic] If you favor readability over a good score in the metrics (was it Joel Spolsky who said "what's measured gets done"? - meaning that metrics are abused more often than not, I suppose), it is often better to replace a complex conditional with a well-named boolean.

Then

if (condition1 && condition2 && condition3)
    Console.WriteLine("wibble");

becomes

bool theWeatherIsFine = condition1 && condition2 && condition3;

if (theWeatherIsFine)
    Console.WriteLine("wibble");
PW
Yes, I would usually do this kind of thing if it improved readability. Interestingly, Refactoring: Improving the Design of Existing Code would recommend performing Extract Method on the conditional rather than Extract Variable...
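
For reference, the Extract Method version of that conditional might look like this (a sketch; the method name is made up):

bool TheWeatherIsFine()
{
    return condition1 && condition2 && condition3;
}

if (TheWeatherIsFine())
    Console.WriteLine("wibble");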
RickL
+1  A: 

This is the danger of applying any metric blindly. The CC metric certainly has a lot of merit, but as with any other technique for improving code it can't be evaluated divorced from context. Point your management at Capers Jones's discussion of the Lines of Code measurement (wish I could find a link for you). He points out that if Lines of Code were a good measure of productivity, then assembly language developers would be the most productive developers on earth. Of course they're no more productive than other developers; it just takes them a lot more code to accomplish what higher-level languages do with less source code. I mention this, as I say, so you can show your managers how dumb it is to blindly apply metrics without intelligent review of what the metric is telling you.

I would suggest that your management would be wise to use the CC measure as a way of spotting potential hot spots in the code that should be reviewed further, if they're not doing so already. Blindly aiming for the goal of lower CC without any reference to code maintainability or other measures of good coding is just foolish.

Onorio Catenacci
"Measuring a software product by lines of code is like measuring an airplane by how much it weighs." - Bill Gates (I believe)
6eorge Jetson
+1  A: 

Like all software metrics, CC is not perfect. Used on a big enough code base, it can give you an idea of where the problematic zones might be.

There are two things to keep in mind here:

  1. Big enough code base: In any non-trivial project you will have functions with a really high CC value - so high that it does not matter whether, in one of your examples, the CC is 2 or 3. A function with a CC of, let's say, over 300 is definitely something to analyse; it doesn't matter if the CC is 301 or 302.
  2. Don't forget to use your head. There are methods that need many decision points. Often they can be refactored somehow to have fewer, but sometimes they can't. Do not go with a rule like "Refactor all methods with a CC > xy". Have a look at them and use your brain to decide what to do.

I like the idea of a weekly analysis. In quality control, trend analysis is a very effective tool for identifying problems as they are being created. This is so much better than having to wait until they get so big that they become obvious (see SPC for some details).

Treb
Yes, it is quite good that we can see what has changed over the last week; often you can see a function on the list that you worked on, remember what your changes were, and see why the CC has increased. Unfortunately our thresholds are quite low, and often it is hard to reduce the CC easily.
RickL
Yes, that is the problem I have with fixed rules. They may be OK for 80 or 90 percent of the cases, but that still leaves 10% where they suck.
Treb
+2  A: 
  1. Yes. Your first example has a decision point and your second does not, so the first has a higher CC.
  2. Yes-maybe, your first example has multiple decision points and thus a higher CC. (See below for explanation.)
  3. Yes-maybe. Obviously they have the same number of decision points, but there are different ways to calculate CC, which means ...

... if your company is measuring CC in a specific way, then you need to become familiar with that method (hopefully they are using tools to do this). There are different ways to calculate CC for different situations (case statements, Boolean operators, etc.), but you should get the same kind of information from the metric no matter what convention you use.
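
For example (an illustrative sketch of two common conventions; check which one your tool actually uses):

// Standard CC counts the if as one decision point:  CC = 1 + 1 = 2
// Extended CC also counts each && / || operator:    CC = 1 + 2 = 3
if (condition1 && condition2)
    Console.WriteLine("wibble");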

The bigger problem is what others have mentioned, that your company seems to be focusing more on CC than on the code behind it. In general, sure, below 5 is great, below 10 is good, below 20 is okay, 21 to 50 should be a warning sign, and above 50 should be a big warning sign, but those are guides, not absolute rules. You should probably examine the code in a procedure that has a CC above 50 to ensure it isn't just a huge heap of code, but maybe there is a specific reason why the procedure is written that way, and it's not feasible (for any number of reasons) to refactor it.

If you use tools to refactor your code to reduce CC, make sure you understand what the tools are doing, and that they're not simply shifting one problem to another place. Ultimately, you want your code to have few defects, to work properly, and to be relatively easy to maintain. If that code also has a low CC, good for it. If your code meets these criteria and has a CC above 10, maybe it's time to sit down with whatever management you can and defend your code (and perhaps get them to examine their policy).

Dave DuPlantis
A: 

Cyclomatic complexity is analogous to temperature. Both are measurements, and in most cases meaningless without context. If I said the temperature outside was 72 degrees, that wouldn't mean much; but if I added the fact that I was at the North Pole, the number 72 would become significant. Likewise, if someone told me a method has a cyclomatic complexity of 10, I couldn't determine whether that is good or bad without its context.

When I code-review an existing application, I find cyclomatic complexity a useful "starting point" metric. The first thing I check for is methods with a CC > 10. These ">10" methods are not necessarily bad; they just provide me a starting point for reviewing the code.

General rules when considering a CC number:

  • The CC of a method and the number of tests for it should be related by CC <= number of tests (see the sketch below)
  • Refactor for a lower CC only if it increases maintainability
  • A CC above 10 often indicates one or more code smells
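
As an illustration of the first rule (a made-up method, with the tests sketched as comments):

// CC = 2: 1 for the method plus 1 for the if decision point
string Sign(int x)
{
    if (x < 0)
        return "negative";
    return "non-negative";
}

// CC <= number of tests suggests at least two tests, one per independent path:
// Sign(-1) -> "negative"
// Sign(1)  -> "non-negative"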
fremis