There have been many discussions, on Stack Overflow and elsewhere, of how to judge a good programmer in an interview: make him write code, check for common mistakes, observe his coding style, and so on. All good.

But what is less discussed, and probably considered even less important, is how you evaluate and judge a programmer in your organization after he or she has spent some time with you - say, at annual appraisals. Obviously there will be a lot of evaluation of how good a team player he was, how well he communicated with others, how much effort he put in to meet deadlines, etc. (I am not condoning any of these practices, merely stating what happens.)

But how do you judge a programmer in your organization based on the actual work he has done over the past six months or year? What 'real' metrics will you use?

Obviously we are (hopefully) past the days of using "number of lines of code" as a metric. But what other factors will a team lead, a manager, or even his peers evaluate to see how good a programmer he has been?

+1  A: 

I personally would say that a good performance metric, and one used at my day job, is the number of defects found in code created by the developer. Another key component is the degree of compliance with company documentation, change control, and other policies. We have spot-check audits, and failures on any of these items are considered major stopping points.

I would also evaluate and rate the person's efforts at improving and maintaining their technical ability and overall professional development. Lastly, team participation and overall group interaction would be on the list of items to review and evaluate.
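
A rough sketch, not a recommendation, of how the defect metric above might be tallied: this assumes the defect tracker and change log can each be exported as a CSV with a "developer" column (the file layout and column names here are hypothetical). It normalizes by change count, since raw defect counts penalize prolific developers - a point raised in the comments below.

    # Hypothetical sketch: defects per change, per developer.
    # Assumes CSV exports with a "developer" column; adapt to your tracker.
    import csv
    from collections import Counter

    def defect_rates(defects_csv, changes_csv):
        with open(defects_csv) as f:
            defects = Counter(row["developer"] for row in csv.DictReader(f))
        with open(changes_csv) as f:
            changes = Counter(row["developer"] for row in csv.DictReader(f))
        # Normalize: defects per change, so that writing more code doesn't
        # by itself look like writing worse code.
        return {dev: defects[dev] / changes[dev] for dev in changes}

    print(defect_rates("defects.csv", "changes.csv"))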

Mitchel Sellers
Defects found when/how?
Vinko Vrsalovic
We track all changes to the systems, and defects are tracked in a separate bucket and associated with a developer. Not a great policy, but it does sometimes help with quality.
Mitchel Sellers
I'd disagree: people who write great code quickly will have more defects than people who write bad code slowly.
Jim Puls
Jim, there is a very valid point here as well. If these metrics are used, other metrics must be considered alongside them: the percentage, severity, and reasoning behind the defects.
Mitchel Sellers
Working where there are people who label desired changes as bugs, I'm not sure I'd feel comfortable with someone else determining what counts as a defect and then being evaluated on that. And how do you determine who was at fault for a valid defect if multiple people were involved across different projects?
HLGEM
HLGEM - I can see your argument there as well. On our team, each project has a single person responsible, so it is a bit easier for us to handle.
Mitchel Sellers
Hmmm, this means it's the same developer who finds the bugs. And then you evaluate him on how many bugs he has found in his own code.
graffic
A: 

As @mitchel-sellers says - but also include some details about the percentage of projects delivered on time, and efficiency (how long did similar jobs take developers at a similar level?).

Dominic Rodger
+3  A: 

Given her assignments, does she:

  • make significant progress without unnecessary help?
  • request help when appropriate?
  • finish assigned tasks on time?
  • keep superiors informed early when hurdles prevent assignments from being finished on time? (Don't let the first time you tell us be two days before the deadline when you knew about the problem three weeks ago!)
  • look for opportunities to improve quality, functionality, automation, or maintainability (not necessarily in that order) and bring them forward as items to do, for herself or others (whether the lead/manager kills them is far less important)?
  • expand her own scope by learning technologies outside of her normal day job (great, you know the Windows code inside out, but do you know how it affects the Unix port)?
Tanktalus
She? A programmer? Seriously? :-)
David Johnstone
+8  A: 

"Real" code-based metrics are pretty bogus for evaluating engineers in a software organization, because the code is far from the most important contribution that a software engineer makes.

I look at the effect a programmer has on his or her team:

  • are people better off or worse off for working with this person?
  • does other people's code get better or worse when this person touches it?
  • does this person facilitate improvements or fight them?
  • is this person learning and growing on the job?
  • has this person been able to achieve his or her own goals?
Jim Puls
I agree with you; however, measuring at least some of those items is hard!
Lars A. Brekken
+2  A: 

ruthlessly with respect to mutually-agreed-upon expectations

generously with respect to personal/stretch goals

if you don't have either of those, you cannot do an evaluation - it would just be an opinion

Steven A. Lowe
+3  A: 

There are several 'formal' metrics which could be used here, but really the key factor is the peer review results.

Check your CI system's records. Get your Wiki statistics. Get everyone your developer worked with to fill in a short review form, and analyse the results.

  • Was he a team player or a lone ranger?
  • Was his code problematic for others in the team?
  • Did he commit code on time? Did he break the builds more often than other people (see the sketch after this list)? Was his code following the team standards?
  • Did he help other team members in his free time?
  • What was his attitude when something bad happened?
  • How many Wiki updates did the project KM get from him, and how good were they?
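
The build-break comparison is one item that can be computed mechanically if your CI system exports build results. A minimal sketch, assuming a CSV export with "committer" and "status" columns (both names are hypothetical; real export formats vary by CI tool):

    # Hypothetical sketch: fraction of each person's commits that
    # broke the build, from a CSV export of CI build results.
    import csv
    from collections import Counter

    def break_rates(builds_csv):
        total, broken = Counter(), Counter()
        with open(builds_csv) as f:
            for row in csv.DictReader(f):
                total[row["committer"]] += 1
                if row["status"] == "failed":
                    broken[row["committer"]] += 1
        # A rate rather than a raw count, so frequent committers
        # aren't penalized just for committing more.
        return {dev: broken[dev] / total[dev] for dev in total}

    print(break_rates("builds.csv"))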

Listen to his project manager. Listen to his team leader. Listen to the QA guys. You'll quickly understand enough about the developer if you know what to look for.

One thing to remember is that not everyone is the same. If the developer in question aspires to be a team leader, look more for leadership/coaching qualities. For junior to mid levels, look more for compliance with the rules set by others, etc.

Ilya Kochetov
Peer review is dangerous. While it can provide feedback, some people - out of personality conflict, professional jealousy, etc. - can give a bad review even when the team member is working perfectly fine.
MikeJ
MikeJ: that's exactly why you need peer reviews - if someone has a personal conflict with another team member, that should be investigated.
Ilya Kochetov
I agree with Mike; peer review is extremely inaccurate if you have a snake on your team.
HLGEM
A: 

It comes down to setting up goals and targets for the developer to meet.

Some of these goals are toward team/project growth:

  • deliver module X by Q2
  • ensure test coverage exceeds 80% on all new code (see the sketch after these lists)
  • drive team to adopt IEEE standard xxxx

Some are personal:

  • earn MCSA
  • new code follows documentation/layout guidelines.
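
For measurable goals like the coverage target above, the check itself can be automated. A minimal sketch of a CI gate, assuming a Cobertura-style coverage.xml (e.g. as produced by coverage.py's 'coverage xml' command); the threshold and file name are assumptions to adapt to your pipeline. Note that this gates overall coverage; measuring coverage of new code specifically needs a diff-aware tool.

    # Hypothetical CI gate: fail the build if overall line coverage
    # reported in a Cobertura-style coverage.xml drops below 80%.
    import sys
    import xml.etree.ElementTree as ET

    THRESHOLD = 0.80

    root = ET.parse("coverage.xml").getroot()
    line_rate = float(root.get("line-rate", "0"))

    print(f"line coverage: {line_rate:.1%} (threshold {THRESHOLD:.0%})")
    if line_rate < THRESHOLD:
        sys.exit(1)  # non-zero exit marks the CI job as failed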

The idea is that they need to be directly measurable, achievable, and under the control of the developer/manager. Subjective goals tend to blind you to bad performance because you like the person. Unknowable goals such as productivity or defect counts are not well suited to getting a fair picture of performance because they are not provable either way.

MikeJ
+1  A: 

For evaluating software engineers, I've never found any metrics I really liked. They can all be gamed too easily and tend to encourage the wrong behaviors. (I suppose metrics might work better in sales where the primary objective is quantifiable.) Instead, I look at quality which includes some quantifiable things like amount of code or features produced and number of bugs found, but also includes things like quality of design, leadership, knowledge of technology, knowledge of business, and interaction with team members.

When it's close comparing two team members, I pretend my team has been split in two and I am one of the team captains. Who would I pick first?

C. Dragon 76