Hi

I just started working for a large company. In a recent internal audit measuring metrics such as cyclomatic complexity and file sizes, it turned out that several modules, including the one owned by my team, have a very high index. So for the last week we have all been concentrating on lowering these indexes for our code, by removing decision points and splitting files.

Maybe I am missing something, being the new guy, but how will this make our software better? I know that software metrics can measure how good your code is, but does it work the other way around? Will our code become better just because, for example, we split a 10,000-line file into four 2,500-line files?

+5  A: 

It depends how you define "better". Smaller files and lower cyclomatic complexity generally make code easier to maintain. Of course the code itself could still be wrong, and unit tests and other test methods will help with that. It's just one part of making code more maintainable.
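To make the "lower cyclomatic complexity" part concrete, here is a minimal sketch in Java (the shipping-cost example and all names are invented for illustration). A method's cyclomatic complexity is roughly its number of decision points plus one, so turning branches into data lowers it without changing behavior:

    import java.util.Map;

    class ShippingExample {
        // Before: an if/else-if chain with four decision points,
        // so the method's cyclomatic complexity is about 5.
        static double shippingCostBefore(String region) {
            if (region.equals("EU")) return 5.0;
            else if (region.equals("US")) return 7.0;
            else if (region.equals("ASIA")) return 9.0;
            else if (region.equals("OTHER")) return 12.0;
            else throw new IllegalArgumentException("Unknown region: " + region);
        }

        // After: the branches become entries in a lookup table; only one
        // decision point (the missing-key check) remains, so the
        // complexity drops to about 2 while the behavior stays the same.
        private static final Map<String, Double> COSTS =
            Map.of("EU", 5.0, "US", 7.0, "ASIA", 9.0, "OTHER", 12.0);

        static double shippingCostAfter(String region) {
            Double cost = COSTS.get(region);
            if (cost == null) throw new IllegalArgumentException("Unknown region: " + region);
            return cost;
        }
    }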

blowdart
A: 

Have you ever opened one of your own projects again after several months? The larger and more complex the individual components are, the more one asks oneself what genius wrote that code and why on earth he wrote it that way. And there is never too much, or even enough, documentation. So if the components themselves are smaller and less complex, it is easier to re-understand them.

LuI
A: 

This is a bit subjective. The idea of assigning a maximum cyclomatic complexity index is to improve the maintainability and readability of the code.

As an example, from the perspective of unit testing it is really convenient to have smaller "units", as the sketch below shows, and avoiding long stretches of code helps the reader understand it. You cannot ensure that the original developer will work on the code forever, so from the company's perspective it is fair to set such criteria to keep the code "simple".

It is easy to write code that a computer can understand. It is much harder to write code that a human can understand.
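A minimal sketch of why smaller units are convenient to test, assuming JUnit 5; the parsePriceCents helper is hypothetical and only stands in for a small, single-purpose extracted method. A unit with one job and no hidden state can be exercised in a few lines, which is impractical for a long method that mixes parsing, validation, and I/O:

    import static org.junit.jupiter.api.Assertions.*;

    import org.junit.jupiter.api.Test;

    class PriceParserTest {
        // A small extracted unit: one job, no hidden state or I/O.
        static int parsePriceCents(String text) {
            String[] parts = text.trim().split("\\.");
            int euros = Integer.parseInt(parts[0]);
            int cents = parts.length > 1 ? Integer.parseInt(parts[1]) : 0;
            return euros * 100 + cents;
        }

        @Test
        void parsesWholeAndFractionalPrices() {
            assertEquals(1999, parsePriceCents("19.99"));
            assertEquals(500, parsePriceCents("5"));
            assertThrows(NumberFormatException.class, () -> parsePriceCents("abc"));
        }
    }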

Chathuranga Chandrasekara
+2  A: 

Code is easier to understand and manage in smaller chunks.

It is a good idea to group related bits of code in their own functional areas for improved readability and cohesiveness.

Having a whole large program in a single file will make your project very difficult to debug, extend, and maintain.

The particular metric is really only a rule of thumb and should not be followed religiously, but it may indicate something is not as nice as it could be.

Whether legacy working code should be touched and refactored is something that needs to be evaluated. If you decide to do so, you should consider writing tests for it first; that way you'll quickly know whether your changes broke any required behavior.
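One common way to do that is a characterization test that pins down the code's current observable behavior before any refactoring starts. A minimal sketch, assuming JUnit 5; legacyDiscount is a hypothetical stand-in for the real legacy routine:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class LegacyCharacterizationTest {
        // Hypothetical stand-in for the untouched legacy routine.
        static int legacyDiscount(int quantity) {
            if (quantity >= 100) return 20;
            if (quantity >= 10) return 10;
            return 0;
        }

        @Test
        void pinsCurrentBehaviorBeforeRefactoring() {
            // Recorded outputs of the code as it behaves today: not
            // necessarily "correct", just what the system currently does.
            int[][] recorded = { {0, 0}, {9, 0}, {10, 10}, {99, 10}, {100, 20} };
            for (int[] sample : recorded) {
                assertEquals(sample[1], legacyDiscount(sample[0]));
            }
        }
    }

If the refactored version changes any of these outputs, the test fails immediately, which is exactly the safety net described above.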

Alex
+4  A: 

The purpose of metrics is to give you more control over your project. They are not a goal in themselves, but they can help to increase the overall quality and/or to spot design disharmonies. Cyclomatic complexity is just one of them.

Test coverage is another. It is, however, well known that you can have high test coverage and still have a poor test suite, or the opposite: a great test suite that focuses on only one part of the code. The same holds for cyclomatic complexity. Consider the context of each metric, and whether there is actually something to improve.

You should try to avoid accidental complexity, but if the processing has essential complexity, your code will be more complicated anyway. In that case, aim for maintainable code with a fair balance between the number of methods and their size.

A great book to look at is "Object-Oriented Metrics in Practice".

ewernli
A: 

How will this make our software better?

An excerpt from the article Fighting Fabricated Complexity, about NDepend, a tool for .NET developers. NDepend is good at helping teams manage large and complex code bases. The idea is that code metrics are good at reducing fabricated complexity in the code's implementation:


During my interview with Scott Hanselman on software metrics, Scott made a particularly relevant remark.

Basically, while I was explaining that long and complex methods kill quality and should be split into smaller methods, Scott asked me:

Looking at this big, too-complicated method, I break it up into smaller methods; the complexity of the business problem is still there. Looking at my application, I can say it is no longer complex from the method perspective, but the software itself, the way it is coupled with other bits of code, may indicate other problems…

Software complexity is a subjective measure, relative to human cognitive capacity. Something is complex when it requires effort to be understood by a human. The fact is that software complexity is a two-dimensional measure. To understand a piece of code, one must understand both:

  • what the piece of code is supposed to do at run-time, the behavior of the code; this is the business problem complexity
  • how the actual implementation achieves the business problem, what the developer's mental state was while she wrote the code; this is the implementation complexity

Business problem complexity lies in the specification of the program, and reducing it means changing the behavior of the code itself. On the other hand, we speak of fabricated complexity when it comes to the complexity of the implementation: it is fabricated in the sense that it can be reduced without altering the behavior of the code.
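A minimal sketch of reducing fabricated complexity without altering behavior, in Java; the report-label example is hypothetical. The nested conditionals are replaced by a guard expression and an extracted helper, and every input still maps to the same output:

    class ReportFormatter {
        // Before: nested conditionals pile up fabricated complexity.
        static String labelBefore(int errors, boolean verbose) {
            String label;
            if (errors == 0) {
                if (verbose) { label = "OK (no errors)"; } else { label = "OK"; }
            } else {
                if (verbose) { label = "FAILED (" + errors + " errors)"; } else { label = "FAILED"; }
            }
            return label;
        }

        // After: a flat expression plus an extracted helper. The same
        // inputs produce the same outputs; only the implementation
        // became simpler.
        static String labelAfter(int errors, boolean verbose) {
            String base = errors == 0 ? "OK" : "FAILED";
            return verbose ? base + detail(errors) : base;
        }

        private static String detail(int errors) {
            return errors == 0 ? " (no errors)" : " (" + errors + " errors)";
        }
    }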

Patrick Smacchia - NDepend dev
A: 

How will this make our software better?

It can be a trigger for refactoring, but improving one metric doesn't guarantee that all the other quality metrics stay the same. And tools can only track a few metrics; you can't measure the degree to which code is understandable.

Will our code become better just because, for example, we split a 10,000-line file into four 2,500-line files?

Not necessarily. Sometimes the larger file can be more understandable, better structured, and have fewer bugs.

Most design patterns, for example, "improve" your code by making it more general and maintainable, but often at the cost of added source lines.
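A sketch of that trade-off, in Java with a hypothetical tax example: replacing a conditional with a Strategy-style interface adds lines and types, yet each remaining unit has trivial complexity, and supporting a new country no longer means editing an existing method:

    import java.util.Map;

    class TaxExample {
        // Before: one short method, but every new country adds a branch
        // and raises this method's cyclomatic complexity.
        static double taxBefore(String country, double amount) {
            if (country.equals("DE")) return amount * 0.19;
            if (country.equals("FR")) return amount * 0.20;
            return 0.0;
        }

        // After: a Strategy-style interface. More source lines overall,
        // but each unit is trivial and new rules are added as data.
        interface TaxRule { double tax(double amount); }

        static final Map<String, TaxRule> RULES = Map.of(
            "DE", amount -> amount * 0.19,
            "FR", amount -> amount * 0.20);

        static double taxAfter(String country, double amount) {
            TaxRule rule = RULES.get(country);
            return rule == null ? 0.0 : rule.tax(amount);
        }
    }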

Timo Westkämper