Given a team of developers collaborating on the production of a piece of software, how would you go about assessing individual developer performance and quality?

I'm mostly looking for non-subjective and (as far as possible) rational criteria by which the quality of a developer's work can be judged (if not measured).

I know this is a very difficult question, and there may not even be a good answer. And if it is true that the question cannot be answered, then what are the consequences?

Don't flip the bozo bit

+3  A: 

Do they complete the tasks you give them? Are they willing to work with others? Do they document their work? How does their error rate compare to the development team's average, taking task difficulty into account?

Jared
+2  A: 

This is a very difficult question to answer, as the sum total of the team's output may be quite different from that of the individual developers.

Some developers are hard-core coders. They get down to the keyboard very quickly and produce very quickly. Others spend more time thinking. How can you measure these against each other? Perhaps on their individual outputs but, as we all know, outputs are notoriously difficult to measure. Lines of code, features, bug counts, etc. are all poor statistics.

To complicate things further, you have the enablers. These people may produce very little code but, by taking away problems, can enable other programmers to be much more efficient and effective than they would be working alone.

BlackWasp
+3  A: 

Some things to keep track of are:

  1. Bugs linked back to changes or features they've implemented (a rough sketch of automating this appears after the list)
  2. Code they've written that is of genuinely questionable quality (not just code that you personally, for whatever reason, don't like)
  3. Whether they keep making the same mistakes
  4. The number of innovative concepts they've implemented
  5. Supportive contributions to the team (setting up the build, etc.)
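
For point 1, here is a minimal sketch of one way the tracking could be automated. It assumes a bug tracker export that links each bug to the commit believed to have introduced it (the bugs.csv name and its column layout are made up for illustration), and uses git show to resolve each commit to its author:

    import csv
    import subprocess
    from collections import Counter

    def bugs_by_author(csv_path, repo_path="."):
        """Tally bugs per author, given a tracker export that links each bug
        to the commit believed to have introduced it (hypothetical format)."""
        counts = Counter()
        with open(csv_path, newline="") as fh:
            # Assumed export columns: bug_id, introduced_by_commit
            for row in csv.DictReader(fh):
                sha = row["introduced_by_commit"]
                author = subprocess.run(
                    ["git", "-C", repo_path, "show", "-s", "--format=%an", sha],
                    capture_output=True, text=True, check=True,
                ).stdout.strip()
                counts[author] += 1
        return counts

    if __name__ == "__main__":
        for author, n in bugs_by_author("bugs.csv").most_common():
            print(f"{author}: {n} bugs traced back to their changes")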

Most of all, though, you have to keep track of this over the course of your working relationship to get a well-balanced view. If you wait until review time, you will have a selective memory (good or bad).

John MacIntyre
+2  A: 

One criterion I've found is the ability to create abstractions. For instance, a fellow developer asks a question about some programming issue. Then a few days later, they ask about the exact same issue, but with some minor difference. The difference is that I was able to parameterize the problem so that it fit a larger class, while the other person saw the new issue as a totally different problem. Over time, the failure to find and abstract patterns will be crippling for a developer and their code.
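
To make "parameterizing the problem" concrete, here is a small invented example (the function and data are hypothetical, not from the answer): two questions that look different on the surface share the same shape, and one abstraction covers both.

    # Two "different" questions that are really the same problem:
    #   "How do I find the most expensive order?"
    #   "How do I find the longest-running job?"
    # A developer who spots the shared shape writes one abstraction instead of two copies.

    def max_by(items, key):
        """Return the item for which key(item) is largest."""
        best = None
        for item in items:
            if best is None or key(item) > key(best):
                best = item
        return best

    orders = [{"id": 1, "total": 40.0}, {"id": 2, "total": 75.5}]
    jobs = [{"name": "backup", "seconds": 320}, {"name": "etl", "seconds": 90}]

    print(max_by(orders, key=lambda o: o["total"]))   # most expensive order
    print(max_by(jobs, key=lambda j: j["seconds"]))   # longest-running job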

Of course, the problem with evaluating this way is that everything is relative to the person doing the evaluation. You might look at my work and notice abstractions that have not occurred to me. If so, my evaluation will be less useful than yours.

Jon Ericson
Especially in OOP, a developer's ability to refactor and create abstractions is highly indicative of their talent.
Chris
OTOH, some people over-abstract, especially in the OOP realm. ;-)
Jon Ericson
Sure, it's probably much more important in designing class libraries, APIs, domain-level logic, back-end, etc. :)
Chris
+2  A: 

If you're looking for an empirical, quantitative assessment as opposed to a subjective, qualitative one, good luck! :)

Sure, code coverage, analysis with tools like NDepend, and bug tracking will help, but not everyone on a team is trying to solve problems with the same complexity.

I know this wasn't constructive; forgive me for my pessimism.

Chris
I appreciate your participation in this thread and I agree that a purely quantitative assessment is little more than a pipe-dream.
Seventh Element
+9  A: 

Honestly, I would not measure individual developers' performance.

I would put the team in charge of the product development and assess team velocity after a few iterations.
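
For what it's worth, velocity here is just the amount of work the team completes per iteration, tracked over a few iterations; a minimal sketch of the arithmetic, with made-up story-point numbers:

    # Completed story points per iteration (hypothetical numbers).
    completed_points = [21, 18, 25, 23]

    velocity = sum(completed_points) / len(completed_points)  # average points per iteration
    print(f"Team velocity: {velocity:.1f} points per iteration")

    # Rough forecast: iterations needed for a 120-point backlog at this pace.
    backlog = 120
    print(f"Estimated iterations remaining: {backlog / velocity:.1f}")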

Other methods (number of defects per line of code, number of lines of code produced, number of defects fixed per time period, etc.) may seem objective at first, but they are difficult to gather and do not account for the overall work done by a developer.

I would focus on the results achieved by a cross functional team.

Are the customers happy with the product delivered, and does it contain the features they need? Does the product sell (assuming you are selling the product)?

David Segonds
This sounds great in an idealistic sort of way, but what do you do about the situation where you have 4 great developers, and one crap one? Just let the 4 good ones carry the crap one forever because the "team" has overall great productivity?
Orion Edwards
+2  A: 

Do they test their code? I don't mean formally (though that is a plus). Rather, do they take a few moments after cranking out a bunch of changes to make sure it works with the larger system before committing/releasing it?

Part of this is the ability to visualize the bigger picture. Part of it is simple professionalism. Part of it is the ability to see how their changes might fail in addition to how they could succeed.

Jon Ericson
A: 

I do know of a lot of things not to measure:

Number of bugs fixed. Programmers who fix lots of bugs may be the ones who introduce lots of bugs, or who fix bugs that customers don't care about, or who fix bugs sloppily to save time. (I'm that last one.)

Number of features implemented. Again, this may mean that they're being hasty and sloppy, or that they're implementing features that are easy, regardless of importance.

Lines of code written. More code means more bugs. It also means more time spent writing it. It doesn't necessarily mean more functionality.

Jay Bazuzi
+8  A: 

Disclaimer: I'm not a manager and I don't know what I'm talking about, but here's what I'd do based on my experiences thus far.

In each developer's private performance review (or some other suitable private meeting), ask them "so how do you think the other guys in your team are doing?". A lot of developers will be shy and say nothing or just go "ummm, fine", but in aggregate I believe it may give you a picture of who is excellent, or who is not up to scratch.

This is by no means a quantitative measure (you can't produce such a thing), but it will give you a good basic idea of where to start.

Orion Edwards
The reason I selected this as the accepted answer is that I like the idea of peer assessment. On the other hand, I also like the answer given by David Segonds about not focusing on the individual but on group performance.
Seventh Element
This might be a risky approach. You're essentially asking programmers to be involved in company politics (no matter how well you disguise your question), and programmers hate company politics. Also, crappy programmers might give you very inaccurate info. (I'm not a manager either, so this could be completely wrong.)
Jivko Petiov
Good point Jivko. Although programmers profess to hate company politics, I think what they actually hate is *management* politics. Many programmers are as gossipy and political as anyone else when it comes to things they don't consider "managery"
Orion Edwards
+1  A: 

Do they take notes? Do they have their own work plan or task list? Are they organized?

Do they get tasks accomplished to the standards given? Do they avoid "forgetting" things they have been told about?

Do they accept feedback... and then actually change?

Glennular
+4  A: 
  1. Ask each programmer what they've learned or studied on a monthly basis. Programmers who rarely read or investigate new approaches are probably going to be on the lower end of the skill scale.

  2. Take note of how often a programmer needs help with their projects. Collaboration is a good thing, but programmers with deficient training and/or problem solving skills tend to lean on other programmers for help too often.

  3. When it comes time to release the current build to your clients, is there typically a flurry of support calls, or do the transitions go smoothly? When the programmer checks his work in, does it typically break the current build?

  4. When a programmer speaks up in meetings or code reviews, do they communicate well? Or do they ramble, make confused statements, etc.?

  5. When given a well-defined set of requirements, can the programmer estimate their work and deliver according to the estimate? (A small sketch of tracking this appears after the list.)
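
One way to make point 5 concrete is to track each developer's ratio of actual to estimated effort over a series of tasks; a small sketch with invented numbers:

    # (estimated_hours, actual_hours) for a developer's recent tasks -- invented data.
    tasks = [(8, 10), (16, 15), (4, 9), (12, 13)]

    ratios = [actual / estimate for estimate, actual in tasks]
    mean_ratio = sum(ratios) / len(ratios)

    # A ratio near 1.0 means estimates are reliable; consistently above 1.0 suggests
    # optimistic estimates, well below 1.0 suggests padded ones.
    print(f"Mean actual/estimate ratio: {mean_ratio:.2f}")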

I agree with other comments that you shouldn't make a big deal out of individual performance, especially over the team effort. However, the reality is that most companies have different job titles and pay grades, and a responsible manager will try to be as objective as possible when giving someone a promotion and/or pay increase.

Brian Vander Plaats
Point number 5 is well-said.
Thiyagaraj
+1  A: 

Do not measure individuals if their efforts are part of a team effort. This is a very difficult thing to grasp, but it is necessary: anything you do based on measurements that do not perfectly align with the result you actually want will produce all sorts of dysfunctional reactions.

In fact, Joel talks about this all the time:

http://www.joelonsoftware.com/news/20020715.html
http://www.joelonsoftware.com/items/2008/01/22.html

Another way to think about it is to measure one level up to account for statistical variation that might be inherent in the activity you are trying to measure. Or just go read some of Deming's stuff on management.

MSN
Thanks for your reply. I found it very useful!
Seventh Element
A: 

How exactly do you judge how well a programmer performs?

Not very, if at all ;-)

Jonas Kölker