views: 1904
answers: 9

Is there any measurement technique that takes into account not only how many artifacts were generated (e.g. lines of code, use cases, etc.), but also the quality and effort involved?

What is a good measurement that we can deploy that will not hurt the developers?

Every now and then there is a client or a manager who wants to deploy some technique that a vendor is selling at the moment. Unless there is a sound alternative we can point to, developers end up facing some unfair measurement technique (like lines of code - which says nothing about how much time you spent thinking about the problem before finally producing those lines).

A: 

Maybe the ratio of quantity of software created to the bugs in it. We could add performance in here too, but the real problem is: relative to what?
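
A minimal sketch of that ratio, assuming hypothetical per-developer counts of delivered features and bugs reported against them (all names and numbers here are invented):

    # Hypothetical tallies of (features delivered, bugs reported against them).
    # Names and numbers are invented for illustration.
    tallies = {
        "alice": (12, 3),
        "bob": (20, 15),
    }

    for dev, (features, bugs) in tallies.items():
        # Guard against division by zero for the (rare) bug-free developer.
        ratio = features / bugs if bugs else float("inf")
        print(f"{dev}: {ratio:.2f} features per bug")

Even this tiny example runs straight into the "relative to what?" problem: is four features per bug good? It depends entirely on how hard those features were.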

LuRsT
A: 

The only good measure I have is whether a developer's estimates are accurate and met. Are their estimates typically longer or shorter than other developers'? And what is each dev's bug count per feature?

Aaron Fischer
Still too many variables there. This assumes all features are equal and all tasks are equally easy to estimate. Also, someone woefully bad at estimating the time for a task could still be a great programmer; you just don't trust their estimates!
Draemon
Dev bugs per feature is a good idea; however, a programmer's estimation inaccuracy shouldn't be counted against him. IMHO, it should be calculated and applied against future estimates (see the sketch after these comments). Whether it is off by 2% or 500% is irrelevant as long as you know what it is. (sincerity intended)
John MacIntyre
These are both terrible. Estimation is a black art at best, and programmers' estimates are often corrupted by managers. Often the best programmer is put on the hardest subproblem, where a higher bug count may be expected.
Glomek
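
A minimal sketch of John MacIntyre's calibration idea, assuming a hypothetical history of (estimated, actual) hours for one developer; the inaccuracy becomes a correction factor applied to future estimates rather than a grade:

    # Hypothetical (estimated hours, actual hours) pairs for one developer.
    history = [(10, 25), (8, 18), (20, 46)]

    # Mean ratio of actual to estimated time: the developer's personal
    # "optimism factor". It corrects future estimates; it grades no one.
    factor = sum(actual / estimate for estimate, actual in history) / len(history)

    def calibrated(raw_estimate: float) -> float:
        """Scale a new raw estimate by the historical correction factor."""
        return raw_estimate * factor

    print(f"correction factor: {factor:.2f}")                   # ~2.35
    print(f"a raw 12h estimate becomes {calibrated(12):.1f}h")  # ~28.2h

Exactly as the comment says, whether the factor turns out to be 1.02 or 5.0 doesn't matter: once you know it, the estimate becomes usable.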
+2  A: 

The simple answer is no. The only ultimate measure is the success or failure of a project - and that depends on all the developers, the project leader, and the client.

This is a broken way to think about software development. You have to make a subjective judgment about each developer's performance, but also consider why it's good or bad and whether it's actually their fault. Could the company improve things with training or a change in management style?

Draemon
+25  A: 

All my experience and reading lead to one consistent conclusion: no formal, objective measurement of programmer productivity is fair.

In the same way that programmers are required to use their judgment in deciding which problems to solve and how to solve them, managers are required to observe their reports and use their judgment in evaluating how each one is contributing.

Humans have evolved highly sophisticated wetware for the purpose of assessing relations and members of a group. Sure, this wetware can be gamed, but any "objective" assessment technique can be gamed too, and rather more easily.

Classical reasons why objective measurements do not work:

  • Some good programmers focus on hard problems, they have little throughput, but deliver high value.
  • Some bad programmers produce a lot of bug tracker traffic by being excessively fine-grained, and waste everyone's time.
  • Some good programmers don't actually write much code at all, but spend a lot of time teaching and helping other team members.
  • Some bad programmers deliver tons of checklist points, but aggressively externalize as much work as they can, in particular design, refactoring and testing work.

You can fill whole books with examples like that.

ddaa
+1 for the use of the term "wetware" :)
Svante
Excellent points
Eran Galperin
Good concise answer
ChrisF
+3  A: 

Good programmers/technicians recognize the difference between good and bad ones. No metric will replace that judgment. Metrics may help you, but they may just as easily lead you to false conclusions.

Suppose a programmer who fixes almost no bugs and has a very small number of commits to the repo. Poor performance? Maybe he is debugging silent data corruption in the NT operating system kernel (true story).

phjr
+2  A: 

Check this Google Tech Talk:

Measuring Programmer Productivity

CMS
+1  A: 

How about something like hockey's +/- rating? If you're on the ice when the team scores, you get a +1. If you're on the ice when the other team scores, you get a -1.

Obviously this wouldn't work in the short run, but could be used to evaluate people over the long run.

It would also take into account the "Brilliant A--hole", someone who is a great programmer, but a lousy team player.

It would work best in an agile environment where coding is done in two- or three-week sprints. At the end of each sprint, we could evaluate each team's success or failure and give its members a +1, -1 or 0 score.
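
A rough sketch of how such a plus/minus tally could work over sprints, assuming a hypothetical log of who was "on the ice" for each sprint and how it went (+1 success, -1 failure, 0 wash); all names are invented:

    from collections import defaultdict

    # Hypothetical sprint log: (developers on the team, sprint outcome).
    sprints = [
        (["dana", "eli"], +1),
        (["dana", "fay"], -1),
        (["eli", "fay"], +1),
    ]

    plus_minus = defaultdict(int)
    for members, outcome in sprints:
        for dev in members:
            plus_minus[dev] += outcome  # everyone on the ice shares the result

    for dev, score in sorted(plus_minus.items(), key=lambda kv: -kv[1]):
        print(f"{dev}: {score:+d}")

As with hockey, a single sprint tells you little; the signal only emerges over many sprints with shuffled team compositions.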

chris
+6  A: 

There is no objective measure that cannot be gamed: SLOCs, bug counts, fix counts, you name it. I suggest you get a copy of this book:

http://www.amazon.com/Measuring-Managing-Performance-Organizations-Robert/dp/0932633366

and read it. Then give it to your boss with a synopsis (because there's no way s/he's going to read it all - it's hard going at times). It gives a pretty good summation of the reasons why these programs fail in all but the most simplistic situations, and software development isn't by any stretch of the imagination a simplistic situation.

What management need to grasp is that metrics like these aren't just non-functional; they actively make things dysfunctional: testers start fighting with devs over bug reports, devs compete for the jobs most likely to hit targets, and so on. You can destroy a department with this nonsense.

Bob Moore
Good point about metrics causing organizational dysfunction.
ddaa
It's a shame that book is out of print, by the looks of things.
Paul Stephenson
Agreed on the dysfunction thing.
David Thornley
+10  A: 

The key issue here is that you're trying to measure something that isn't quantifiable.

You want to measure things like "how good is this developer?" or "how productive is this developer?".

But the things you can measure are things like "Number of defects resolved", "Number of lines of code written" and "Number of hours worked".

You can use measurable things as surrogates for the things you want to know, but you run the risk that people will game the system.

An example I recall reading years ago (can't find it with a quick google, so no attribution, but it's not mine):

Consider Adam, a developer. He'll typically work 50 hours a week and turn in around 2000 lines of finished code. The quality of his work isn't high - other developers will often spend 40-50 hours fixing things up in testing, but management isn't aware of this.

Now consider Bradley, another developer who sits next to Adam. Bradley has "a life" and won't normally work more than 40 hours a week, though he's happy to pitch in when required. He typically turns in 1500 lines of finished code each week - code that works. It's rare for the testers to find any issues with his code.

Finally, meet Candice. No one is quite sure how many hours she works - sometimes she's in the office before everyone else, but it's not unusual for her to leave at lunchtime. Some suspect she only works 30 hours a week - but she delivers on commitments, so no one is really worried. She's good at spotting redundancies and simplifying code - in a typical week she'll delete as much code as she writes, leaving the system more functional for the same amount of code.

Who's the better developer? Adam works more hours and completes more lines of code; Bradley delivers fewer lines of code but only works standard hours; Candice never seems to add to the codebase, works fewer hours than contracted, but features assigned to her pass testing when required.
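
To make the trap concrete, here is a sketch computing the obvious surrogate metrics from the story's own numbers (Candice's line counts are a rough guess, since the text only says she deletes about as much as she writes):

    # (hours worked per week, lines added, lines deleted) from the story above.
    # Candice's line counts are guessed from "deletes as much as she writes".
    developers = {
        "Adam": (50, 2000, 0),
        "Bradley": (40, 1500, 0),
        "Candice": (30, 800, 800),
    }

    for name, (hours, added, deleted) in developers.items():
        print(f"{name}: {added / hours:.0f} LOC/hour, "
              f"net {added - deleted:+d} lines/week")

Every measurable column ranks Adam first and Candice last, which is exactly backwards once you count the 40-50 hours of cleanup hiding behind Adam's numbers.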

In September 2006 I wrote about the Dangers of KPI choice on my blog - see What are you Measuring?

Bevan
Robert Pirsig got an entire _book_ out of discussing what we mean by quality :-)
Bob Moore