views: 502

answers: 7

I was wondering if anyone has experience with metrics used to measure software quality. I know there are code complexity metrics, but I'm wondering if there is a specific way to measure how well it actually performs during its lifetime. I don't mean runtime performance, but rather a measure of the quality. Any suggested tools that would help gather these are welcome too.

Are there measurements to answer these questions:

  • How easy is it to change/enhance the software (robustness)
  • If it is a common/general enough piece of software, how reusable is it
  • How many defects were associated with the code
  • Has this needed to be redesigned/recoded
  • How long has this code been around
  • Do developers like how the code is designed and implemented

Seems like most of this would need to be closely tied to a configuration management (CM) and bug reporting tool.

+1  A: 

There is a good thread from the old Joel on Software Discussion groups about this.

NoahD
A: 

I know that some SVN stat programs provide an overview of changed lines per commit. If you have a bug-tracking system, and the people fixing bugs and adding features reference the ticket number in their commits, you can then calculate how many lines were affected by each bug fix or feature request. This could give you a measurement of changeability.
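As a rough sketch of that idea, assuming commit subject lines reference tickets as `#123` and the change data comes in a `git log --numstat`-style format (`added<TAB>deleted<TAB>path`), churn per ticket could be tallied like this:

```python
import re
from collections import defaultdict

def churn_per_ticket(log_lines):
    """Tally lines changed per ticket from 'git log --numstat'-style output.

    Assumes subject lines mention tickets as '#123' and numstat lines
    look like '<added>\t<deleted>\t<path>'. Purely illustrative.
    """
    churn = defaultdict(int)
    current_tickets = []
    for line in log_lines:
        tickets = re.findall(r"#(\d+)", line)
        if tickets:
            # A new commit's subject line: remember which tickets it touches.
            current_tickets = tickets
        m = re.match(r"^(\d+)\t(\d+)\t", line)
        if m and current_tickets:
            added, deleted = int(m.group(1)), int(m.group(2))
            for t in current_tickets:
                churn[t] += added + deleted
    return dict(churn)

log = [
    "Fix null check in parser (#42)",
    "12\t3\tparser.c",
    "Add feature #7",
    "100\t0\tfeature.c",
]
print(churn_per_ticket(log))  # {'42': 15, '7': 100}
```

High churn per ticket over time can indicate code that is hard to change safely.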

The next step is simply to count the number of bugs found and relate it to the number of lines of code. There are published benchmarks for how many defects per line of code high-quality software should have.
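The calculation itself is trivial; here is a minimal sketch (defects per thousand lines of code, or KLOC, which is how such benchmarks are usually quoted):

```python
def defect_density(num_defects, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return num_defects * 1000 / lines_of_code

# e.g. 37 defects reported against a 25,000-line codebase
print(defect_density(37, 25_000))  # 1.48 defects per KLOC
```

Tracking this number per release is usually more informative than any single absolute value.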

Janusz
A: 

You could measure this in economic terms or in a programmer's terms.

With the economic approach, you measure the cost of improving code, fixing bugs, adding new features, and so on. If you choose the second approach, you might measure how many people work on your program and how easy it is to, say, find and fix an average bug in person-hours. Neither method is flawless: costs depend on the market situation, and person-hours depend on the actual people and their skills, so it's better to combine both.

This way you get some instruments for measuring the quality of your code. Of course you should take the size of your project and other factors into account, but I hope the main idea is clear.

Malcolm
A: 

If measuring code quality in the terms you describe were a straightforward job and the metrics accurate, there would probably be no need for project managers anymore. Even more, the distinction between good and poor managers would be very small. The fact that it isn't shows that getting an accurate idea of the quality of your software is no easy job.

Your questions span multiple areas that are quantified differently or are very hard to quantify objectively, so you should group them into categories that correspond to common targets. Then you can assign an "importance" factor to each category and derive some metrics from that.

For instance, you could use static code analysis tools to measure the syntactic quality of your code and derive some metrics from that.

You could also derive metrics from the ratio of bugs to lines of code, using a bug-tracking tool integrated with a version control system.

For measuring robustness, reuse, and efficiency of the coding process, you could evaluate the use of design patterns per feature developed (of course, where it makes sense). There's no tool that will help you achieve this, but if you monitor your software as it grows and put numbers on these, it can give you a pretty good idea of how your project is evolving and whether it's going in the right direction. Introducing code-review procedures could help you keep track of these more easily and possibly address them early in the development process. A number to put on these could be the percentage of features implemented using the appropriate design patterns.

While metrics can be quite abstract and subjective, if you dedicate time to them and always try to improve them, they can give you useful information.

A few things to note about metrics in the software process though:

  1. Unless you do them well, metrics can do more harm than good.
  2. Metrics are difficult to do well.
  3. Be cautious about using metrics to rate individual performance or to drive bonus schemes. Once you do this, everyone will try to game the system and your metrics will prove worthless.
Mircea Grelus
+1  A: 

If you are using Ruby, there are some tools to help you out with metrics, ranging from LOC/method and methods/class to Saikuro's cyclomatic complexity.

My boss actually gave a presentation on the software metrics we use at a Ruby conference last year; these are the slides.

An interesting tool that gives you a lot of metrics at once is metric_fu. It checks a lot of interesting aspects of your code: code that is highly similar, changes a lot, or has a lot of branches. All signs that your code could be better :)

I imagine there are a lot more tools like this for other languages too.

Arthur
A: 

A more customer-focused metric would be the average time it takes the software vendor to fix bugs and implement new features.

It is very easy to calculate, based on the created and closed dates recorded by your bug-tracking software.

If your average bug-fixing/feature-implementation time is extremely high, this could also be an indicator of poor software quality.
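As a sketch, assuming your tracker can export created/closed timestamps (the CSV-style date format here is an assumption), the average resolution time is a short calculation:

```python
from datetime import datetime

def mean_resolution_days(tickets):
    """Average days between a ticket's created and closed dates.

    `tickets` is a list of (created, closed) date strings in
    'YYYY-MM-DD' form, e.g. as exported from a bug tracker.
    """
    fmt = "%Y-%m-%d"
    deltas = [
        (datetime.strptime(closed, fmt) - datetime.strptime(created, fmt)).days
        for created, closed in tickets
    ]
    return sum(deltas) / len(deltas)

# Two tickets: one took 3 days to close, the other 8 days.
tickets = [("2010-01-01", "2010-01-04"), ("2010-01-02", "2010-01-10")]
print(mean_resolution_days(tickets))  # 5.5
```

Watching this average per month or per release shows whether the codebase is getting easier or harder to work with.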

MartinHN
+1  A: 

Some of the quality metrics that might help you can be found at www.sdlcmetrics.org.

Mark Kofman