I would like to track metrics that can be used to improve my team’s software development process, improve time estimates, and detect special case variations that need to be addressed during the project execution.

Please limit each answer to a single metric, describe how to use it, and vote up the good answers.

A: 

Code coverage percentage

Ali Shafai
I would strongly argue against this. Coverage just means you've executed that line, and thus it must compile. It doesn't tell you that the test is relevant or that the code is correct.
SCdF
You mean not having it at all is better? At least if you get 10%, you know the other 90% isn't even executed...
Ali Shafai
I'm saying that when you make code coverage a metric, it becomes just a hoop that developers jump through. They can then say "See, we have 100% coverage!" when in reality what you want is each distinct piece of logic to have separate unit tests that validate it. That is far more important than coverage.
SCdF
+7  A: 

Track how long it takes to do a task that has an estimate against it. If it came in well under, question why. If it ran well over, question why.

Don't make it a negative thing; it's fine if tasks blow out or come in way under the estimate. Your goal is to continually improve your estimation process.
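A minimal sketch of this idea in Python, assuming you record (task, estimated hours, actual hours) tuples; the 50% threshold is an arbitrary illustration:

    # Hypothetical (task, estimated_hours, actual_hours) records.
    tasks = [
        ("login page", 8, 6),
        ("report export", 4, 13),
        ("db migration", 16, 15),
    ]

    # Flag tasks whose actuals differ from the estimate by more than 50%.
    for name, estimate, actual in tasks:
        ratio = actual / estimate
        if ratio > 1.5:
            print(f"{name}: well over ({ratio:.0%} of estimate) -- question why")
        elif ratio < 0.5:
            print(f"{name}: well under ({ratio:.0%} of estimate) -- question why")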

SCdF
+6  A: 

Velocity: the number of features per given unit time.

Up to you to determine how you define features, but they should be roughly the same order of magnitude otherwise velocity is less useful. For instance, you may classify your features by stories or use cases. These should be broken down so that they are all roughly the same size. Every iteration, figure out how many stories (use-cases) got implemented (completed). The average number of features/iteration is your velocity. Once you know your velocity based on your feature unit you can use it to help estimate how long it will take to complete new projects based on their features.

[EDIT] Alternatively, you can assign a weight like function points or story points to each story as a measure of complexity, then add up the points for each completed feature and compute velocity in points/iteration.
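For illustration, a minimal Python sketch of the points-per-iteration variant; all numbers are invented:

    # Story points completed in each finished iteration (invented).
    completed_points = [21, 18, 25, 20]

    # Velocity = average points completed per iteration.
    velocity = sum(completed_points) / len(completed_points)

    # Estimate how long a new project of 160 points will take.
    new_project_points = 160
    print(f"velocity: {velocity:.1f} points/iteration")
    print(f"estimated duration: {new_project_points / velocity:.1f} iterations")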

tvanfosson
Have you had success breaking down features to the same size? I have always liked the velocity idea but have had a hard time getting the features the same size. I have to admit I have bought but not yet read Agile Estimating and Planning and the FDD book...
kchad
You can't "measure" features very accurately. You can use Function Points to score their complexity. The Function Point per Time metric is pretty common.
S.Lott
For my purposes, yes -- sort of. I would say that they are all within about an order of magnitude. I have some stories that will take 2-3 hours and some that will take 2-3 days. Most are in the 2-3 days range, which is what I use for my estimates. I ignore "aspect stories" when estimating.
tvanfosson
+1  A: 

Number of similar lines (copy/pasted code).

Ali Shafai
+2  A: 

Number of failing tests or broken builds per commit.

Ali Shafai
+2  A: 

Average function length, or possibly a histogram of function lengths to get a better feel.

The longer a function is, the less obvious its correctness. If the code contains lots of long functions, it's probably a safe bet that there are a few bugs hiding in there.
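As a rough illustration, this Python sketch uses the standard ast module (Python 3.8+ for end_lineno) to histogram function lengths in a single source file; "example.py" is a hypothetical input:

    import ast
    from collections import Counter

    def function_lengths(path):
        # Yield (name, line_count) for each function definition in the file.
        with open(path) as f:
            tree = ast.parse(f.read())
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                yield node.name, node.end_lineno - node.lineno + 1

    # Bucket lengths into 10-line bins for a coarse histogram.
    histogram = Counter(10 * (length // 10) for _, length in function_lengths("example.py"))
    for bucket in sorted(histogram):
        print(f"{bucket:3d}-{bucket + 9:3d} lines: {'#' * histogram[bucket]}")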

Simon Howard
+10  A: 

Inverse code coverage

Get a percentage of code not executed during a test. This is similar to what Shafai mentioned, but the usage is different. If a line of code was run during testing then we know it might be tested. But if a line of code has not been run then we know for sure that it has not been tested. Targeting these areas for unit testing will improve quality and takes less time than auditing the code that has been covered. Ideally you can do both, but that never seems to happen.
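A toy illustration of the arithmetic in Python (coverage tools such as EMMA report this directly; the line sets here are invented):

    # Invented data: all executable lines in a module vs. lines the tests hit.
    all_lines = set(range(1, 101))          # lines 1..100
    executed_lines = {1, 2, 3, 10, 11, 40}  # as reported by a coverage tool

    # Inverse coverage: lines we know for certain were never tested.
    never_run = sorted(all_lines - executed_lines)
    print(f"{len(never_run)}/{len(all_lines)} lines never executed "
          f"({len(never_run) / len(all_lines):.0%}) -- target these first")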

Rontologist
This is better, but I'm not sure about this either. This is from a Java perspective, but there are lots of things that are of 0 importance to test. Accessors and Mutators would be the prime example, but there are others. How would you deal with those?
SCdF
@SCdF - We do not include any generated code in our code coverage on my team unless someone has a strong opinion about it. Most getters and setters are generated from the IDE for example, and we do not include them in our metrics.
Rontologist
Ahh, fair enough then :)
SCdF
Could you point to any specific tools that do this?
VirtuosiMedia
I have been using EMMA for the projects that I have been on, and targeting classes with the lowest coverage manually.
Rontologist
+3  A: 

Interdependency between classes: how tightly your code is coupled.

Ali Shafai
+4  A: 

Track the source and type of bugs that you find.

The bug source represents the phase of development in which the bug was introduced (e.g. specification, design, implementation).

The bug type is the broad style of bug, e.g. memory allocation or an incorrect conditional.

This should allow you to alter the procedures you follow in that phase of development and to tune your coding style guide to try to eliminate over-represented bug types.
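A minimal sketch of such a tally in Python, over invented bug records:

    from collections import Counter

    # Invented bug records: (source phase, bug type).
    bugs = [
        ("implementation", "incorrect conditional"),
        ("specification", "missing requirement"),
        ("implementation", "memory allocation"),
        ("implementation", "incorrect conditional"),
    ]

    # Count by phase and by type to spot over-represented categories.
    by_phase = Counter(phase for phase, _ in bugs)
    by_type = Counter(kind for _, kind in bugs)
    print("bugs by phase:", by_phase.most_common())
    print("bugs by type:", by_type.most_common())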

Andrew Edgecombe
One of the very few frustrations I have with our agile methodology is that we never review where defects came from. When one developer "finishes" a feature and then spends half of the next two iterations fixing the wreckage left behind, it feels personally demoralizing. Just more time burned.
rektide
@rektide: We have that where I work as well (we are working hard to improve it). We deserve that slap in the face if we spend all our time fixing wreckage without making an effort to figure out exactly where in the process defects (as you say) come from.
J M
+2  A: 

Track whether a piece of source has undergone review and, if so, what type. And later, track the number of bugs found in reviewed vs. unreviewed code.

This will allow you to determine how effectively your code review process(es) are operating in terms of bugs found.

Andrew Edgecombe
A: 

If you're using Scrum, you want to know how each day's Scrum went. Are people getting done what they said they'd get done?

Personally, I'm bad at it. I chronically run over on my dailies.

S.Lott
+2  A: 

If you're using Scrum, the backlog. How big is it after each sprint? Is it shrinking at a consistent rate? Or is stuff being pushed into the backlog because of (a) things that weren't thought of to begin with ("We need another use case for an audit report that no one thought of; I'll just add it to the backlog.") or (b) work not getting done and being pushed into the backlog to meet the date instead of delivering the promised features?

S.Lott
+2  A: 

http://cccc.sourceforge.net/

Fan in and Fan out are my favorites.

Fan in: How many other modules/classes use/know this module

Fan out: How many other modules does this module use/know
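To make the two measures concrete, a small Python sketch over an invented dependency map (module -> modules it uses):

    from collections import defaultdict

    # Invented dependency map: module -> modules it uses/knows.
    uses = {
        "ui": ["core", "util"],
        "core": ["util", "db"],
        "db": ["util"],
        "util": [],
    }

    fan_out = {module: len(deps) for module, deps in uses.items()}

    fan_in = defaultdict(int)
    for deps in uses.values():
        for dep in deps:
            fan_in[dep] += 1

    for module in uses:
        print(f"{module}: fan-in={fan_in[module]}, fan-out={fan_out[module]}")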

Ronny
A: 

improve my team’s software development process

It is important to understand that metrics can do nothing to improve your team's software development process. All they can be used for is measuring how well you are advancing toward improving your development process with regard to the particular metric you are using. Perhaps I am quibbling over semantics, but the way you are expressing it is why most developers hate it. It sounds like you are trying to use metrics to drive a result instead of using metrics to measure the result.

To put it another way, would you rather have 100% code coverage and lousy unit tests, or fantastic unit tests and < 80% coverage?

Your answer should be the latter. You could even want the perfect world and have both, but you'd better focus on the unit tests first and let the coverage get there when it does.

Flory
I agree! My intention is to use the metrics as feedback - a way to detect potential problems or potential areas of process that could be improved. I have read that any single metric can be manipulated (and will be, if used as a measure of performance). I expect the best result from a combination of metrics.
kchad
+3  A: 

"improve my team’s software development process": Defect Find and Fix Rates

This relates to the number of defects or bugs raised against the number of fixes which have been committed or verified.

I'd have to say this is one of the really important metrics because it gives you two things:

1. Code churn: how much code is being changed on a daily/weekly basis (important when you are trying to stabilize for a release), and
2. Whether defects are getting ahead of fixes or vice versa, which shows you how well the development team is responding to defects raised by the QA/testers.

A low fix rate indicates the team is busy working on other things (features, perhaps). If the bug count is high, you might need to get developers to address some of the defects. A low find rate indicates either that your solution is brilliant and almost bug-free, or that the QA team has been blocked or has another focus.
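A small sketch of the find-vs-fix comparison in Python, with invented weekly counts:

    # Invented weekly counts of defects raised vs. fixes verified.
    weeks = [
        {"found": 12, "fixed": 5},
        {"found": 9,  "fixed": 11},
        {"found": 14, "fixed": 8},
    ]

    open_defects = 0
    for i, week in enumerate(weeks, start=1):
        open_defects += week["found"] - week["fixed"]
        trend = "finds ahead of fixes" if week["found"] > week["fixed"] else "fixes keeping up"
        print(f"week {i}: {trend}, open defect backlog = {open_defects}")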

RobS
I can't believe this had no upvotes, it was my immediate first choice.
Jeff Atwood
I was a bit surprised too! This is a key metric IMHO
RobS
A: 

Size and frequency of source control commits.
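One way to pull both numbers out of Git, sketched in Python; it assumes it is run inside a repository and parses "git log --numstat" output crudely:

    import subprocess

    # --numstat prints "added<TAB>deleted<TAB>path" for each file in a commit.
    log = subprocess.run(
        ["git", "log", "--numstat", "--pretty=format:commit"],
        capture_output=True, text=True, check=True,
    ).stdout

    commits, lines_changed = 0, 0
    for line in log.splitlines():
        if line == "commit":
            commits += 1
        elif line.strip():
            added, deleted, _ = line.split("\t")
            if added != "-":  # binary files report "-"
                lines_changed += int(added) + int(deleted)

    print(f"{commits} commits, ~{lines_changed / commits:.0f} lines changed per commit")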

dacracot
Sounds like a sneaky way of implementing a LOC metric.
JohnFx
@JohnFx, what if the commits are actually *deleting* code, as they sculpt the simplest, most elegant code possible... (or, say, refactoring).
John C
I'm not saying that source control commits are a bad thing. Just that they aren't a good metric of progress. They could just as easily be destruction as development.
JohnFx
+1  A: 

improve time estimates

While Joel Spolsky's Evidence-based Scheduling isn't per se a metric, it sounds like exactly what you want. See http://www.joelonsoftware.com/items/2007/10/26.html
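Its core idea, roughly sketched in Python: sample historical estimate/actual velocity ratios to simulate many possible totals for the remaining work (all numbers invented):

    import random

    # Historical velocity = estimate / actual, one per completed task (invented).
    velocities = [0.8, 1.1, 0.5, 1.0, 0.9, 0.6]

    # Estimates for the remaining tasks, in hours (invented).
    remaining = [8, 16, 4, 12]

    # Monte Carlo: divide each estimate by a randomly sampled historical
    # velocity and sum the simulated actual hours, many times over.
    trials = sorted(
        sum(est / random.choice(velocities) for est in remaining)
        for _ in range(10_000)
    )
    print(f"50% confidence: {trials[5_000]:.0f}h, 90% confidence: {trials[9_000]:.0f}h")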

Jonas Kölker
+8  A: 

ROI.

The total amount of revenue brought in by the software minus the total amount of costs to produce the software. Break down the costs by percentage of total cost and isolate your poorest-performing and most expensive area in terms of return on investment. Improve, automate, or eliminate that problem area if possible. Conversely, find your highest return-on-investment area and find ways to amplify its effects even further. If 80% of your ROI comes from 20% of your cost or effort, expand that particular area and minimize the rest by comparison.

Costs will include payroll, licenses, legal fees, hardware, office equipment, marketing, production, distribution, and support. This can be done on a macro level for a company as a whole or a micro level for a team or individual. It can also be applied to time, tasks, and methods in addition to revenue.

This doesn't mean ignore all the details, but find a way to quantify everything and then concentrate on the areas that yield the best (objective) results.
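A toy sketch of that breakdown in Python, with invented revenue and cost figures:

    # Invented figures: revenue attributed to each area vs. its cost.
    areas = {
        "core product":  {"revenue": 500_000, "cost": 150_000},
        "custom builds": {"revenue": 80_000,  "cost": 120_000},
        "support plans": {"revenue": 200_000, "cost": 60_000},
    }

    total_cost = sum(a["cost"] for a in areas.values())
    for name, a in areas.items():
        roi = a["revenue"] - a["cost"]
        print(f"{name}: ROI = {roi:+,}, {a['cost'] / total_cost:.0%} of total cost")
    # Expand the area with the best return per unit cost; fix or drop the worst.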

VirtuosiMedia
+1 Although I have to admit that I am AMAZED to see someone think of this!
Mark Brittingham
Not a software metric by itself AFAIK, but a good one anyway +1
SDReyes
+4  A: 

Track the number of clones (similar code snippets) in the source code.

Get rid of clones by refactoring the code as soon as you spot them.
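Dedicated tools (Simian, CPD) do this properly; as a toy illustration, this Python sketch collects sliding windows of whitespace-stripped lines and reports any window that appears more than once (the input file name is hypothetical):

    from collections import defaultdict

    WINDOW = 4  # lines per window; real tools use larger, token-based windows

    def find_clones(lines):
        # Map each repeated WINDOW-line chunk to the line numbers where it starts.
        normalized = [line.strip() for line in lines]
        seen = defaultdict(list)
        for i in range(len(normalized) - WINDOW + 1):
            chunk = "\n".join(normalized[i:i + WINDOW])
            if chunk.strip():  # skip all-blank windows
                seen[chunk].append(i + 1)
        return {chunk: locs for chunk, locs in seen.items() if len(locs) > 1}

    source = open("example.py").readlines()
    for chunk, starts in find_clones(source).items():
        print(f"possible clone starting at lines {starts}:\n{chunk}\n")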

Anonymous
Check out Simian as a tool for finding duplicate code.
Ola Eldøy
+20  A: 
Mikeage
A: 

Most of the aforementioned metrics are interesting but won't help you improve team performance. The problem is that you're asking a management question in a development forum.

Here are a few metrics: estimates vs. actuals at the project schedule level and personal level (see the previous link to Joel's evidence-based method), % defects removed at release (see my blog: http://redrockresearch.org/?p=58), scope creep/month, and overall productivity rating (Putnam's productivity index). Also, developer bandwidth is good to measure.

A: 

Every time a bug is reported by the QA team, analyze why that defect escaped unit testing by the developers.

Consider this a perpetual self-improvement exercise.

RN
A: 

Perhaps you can test CodeHealer.

CodeHealer performs an in-depth analysis of source code, looking for problems in the following areas:

• Audits: quality control rules such as unused or unreachable code, use of directive names and keywords as identifiers, identifiers hiding others of the same name at a higher scope, and more.
• Checks: potential errors such as uninitialised or unreferenced identifiers, dangerous type casting, automatic type conversions, undefined function return values, unused assigned values, and more.
• Metrics: quantification of code properties such as cyclomatic complexity, coupling between objects (Data Abstraction Coupling), comment ratio, number of classes, lines of code, and more.
Hugues Van Landeghem
A: 

I especially like and use the system that Mary Poppendieck recommends. This system is based on three holistic measurements that must be taken as a package (so no, I'm not going to provide 3 answers):

1. Cycle time
  • From product concept to first release, or
  • From feature request to feature deployment, or
  • From bug detection to resolution
2. Business Case Realization (without this, everything else is irrelevant)
  • P&L or
  • ROI or
  • Goal of investment
3. Customer Satisfaction

I don't need more to know whether we are in step with the ultimate goal: providing value to users, fast.
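For the first of these, a minimal Python sketch of bug-fix cycle time over invented timestamps:

    from datetime import date

    # Invented (detected, resolved) dates for closed bugs.
    bugs = [
        (date(2009, 3, 1), date(2009, 3, 4)),
        (date(2009, 3, 2), date(2009, 3, 12)),
        (date(2009, 3, 5), date(2009, 3, 6)),
    ]

    cycle_times = [(resolved - detected).days for detected, resolved in bugs]
    print(f"average bug cycle time: {sum(cycle_times) / len(cycle_times):.1f} days")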

Pascal Thivent
A: 

I like the Defect Resolution Efficiency metric. DRE is the ratio of defects resolved prior to software release to all defects found. I suggest tracking this metric for each release of your software into production.
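The arithmetic is a single division; a sketch in Python with invented counts:

    # Invented counts for one release.
    fixed_before_release = 88  # defects resolved prior to shipping
    found_after_release = 12   # defects reported from production

    # DRE: share of all known defects that were resolved before release.
    dre = fixed_before_release / (fixed_before_release + found_after_release)
    print(f"Defect Resolution Efficiency: {dre:.0%}")  # 88%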

Mark Kofman