What is the best way to set up a code review process as part of your overall development process? Can you share some thoughts/experiences on successful or failed code review processes that you have set up or been involved in?

Some specific questions:

  1. Who reviews the code? Should it be a subset of your developers (for whom code reviews is part of their job description)? Or do you randomly assign pieces of code to developers to review (basically meaning that all developers perform reviews at some point)?

  2. Is it advisable to have only one developer reviewing a piece of code, or should you have 2 or more developers to review a piece of code as a group?

  3. What granularity of code should kick off a review? Every commit? Every new feature/bug fix? Or some other option?

  4. What software tools are available to help with code reviews? What sort of software/tools have you found useful? What features should I look out for when looking for these sorts of tools?

That's a lot of questions! I have more, but I think getting some insight from those of you with experience will be very useful.

+4  A: 

This may seem like fobbing you off, but I recommend requesting a copy of Best Kept Secrets of Peer Code Review. It's a fantastic book and it'll cost you nothing (because SmartBear produce software to support code reviews).

Update: Unfortunately, it appears they're no longer shipping outside of the US. If you're based in the US, get it! If not, with luck they'll start shipping outside of the US some time soon.

Keith Gaughan
Unfortunately they're only shipping that book within the USA now.
MarkJ
Really? Pity about that. Looks as if I got my copy just in time.
Keith Gaughan
For the record, they're now shipping again in the US, Canada and quite a number of other countries. The question is whether that book is biased in favour of their tool, or objective enough. If I have to read about it, I'd rather have an objective comparison of existing tools, for example.
RedGlyph
+2  A: 

There is a broad spectrum of code reviews, from informal desk checks where one coder asks another to take a look at their code, to formal inspections that are very structured and, well, formal. I would start less formal and build until you find the right "size" for your organization. Also, don't just think about reviewing code. Any artifact produced in the process of building software can (and probably should) be reviewed.

Jim Blizard
A: 

Pair programming?

Surya
+2  A: 

In my opinion, based on previous experience, peer review works well when everybody participates. It's the responsibility of the developer to get their code peer reviewed prior to check-in. This involves finding another developer on the team to review the code, and they can't check their code in to source control unless it gets reviewed. This normally works well at the engineering task/bug level, but there's no reason why it can't work on a user story.

There are several tools:

  * Jupiter - an Eclipse plugin for peer review; can keep a historical list of review items.
  * Crucible - web based code review (http://www.atlassian.com/software/crucible/)
  * Review Board - web based code review (http://code.google.com/p/reviewboard/)

As for tool criteria, that's really based on what your manual process is at present. I'd try to get a rough manual process done before looking for a tool to automate the job.

Jon
Review Board is only hosted on Google Code, but that's where Google's involvement ends. It's actually an internal VMware project that they opened up.
Keith Gaughan
Amended, thanks Keith...
Jon
A: 

1. Who reviews the code? On my team a code review is sent to the entire dev team working in that feature area - it's only about 6 of us right now; previously it was as high as 20. It is then up to the recipients to decide if they should review the code or not. This only works because our whole team is committed to the process and voluntarily takes on code reviews as they see fit.

2. Is it advisable to have only one developer reviewing a piece of code...? I would say at least 2 pairs of eyes should see it, but our rule is that only one other person must review the code.

3. What granularity of code should kick off a review? Every check-in / commit. Exceptions are a) trivial changes - just check those in, and b) emergency check-ins to fix build-breaks - check in, then review afterwards.

4. What software tools are available to help with code reviews? I can't tell you about the ones we use because they are proprietary and for internal use only. However, you need a good version control system; from then on it's fairly straightforward - you basically send the code around (zipped up in an email if you have to) and other developers can "diff" the code against the checked-in code (a rough sketch follows below). Depending on your system there may be tools to help you do this more efficiently.
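
One purely illustrative sketch (the answer doesn't name a tool): assuming a Subversion working copy and the svn client on the PATH, something like this dumps the pending changes into a file you could mail to reviewers - the helper and file names here are made up:

    # Illustrative only: write the working copy's uncommitted changes
    # (relative to the last checked-in revision) to a file for reviewers.
    import subprocess

    def pending_review_diff(working_copy=".", out_file="review.diff"):
        diff = subprocess.run(["svn", "diff", working_copy],
                              capture_output=True, text=True, check=True).stdout
        with open(out_file, "w") as f:
            f.write(diff)
        return out_file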

This is what we do for basic code reviews; there are lots of other options - you can make it more formal, hold meetings with minutes and actions, etc. I recommend you read up on code reviews in Code Complete; McConnell is always worth reading and he knows his stuff.

Steve Haigh
A: 

This is based on previous experiences with what I consider to be the best practices I've seen:

  1. Depending on the scale of the code change, this can range from another developer to the whole team in the case of something big being added to the overall architecture. For small changes, the review is mainly a way to double-check what was done, while on larger changes everyone should know what is going on and be able to ask questions of those who put in the new functionality.

  2. Similar to above, it varies. For large changes it makes sense to have more eyes on it, while a change that just fixes a typo shouldn't merit the same attention.

  3. Almost any completion of a code change: Bug fix, enhancement, or new functionality should merit some form of review.

  4. The diffing in Subversion is useful for some reviews, though I'd think there are other tools out there; I just haven't seen them.

A few other thoughts:

Code reviews can vary from looking over the differences in the code and the results in the application to going line by line through pages of printed-out code. I like a more formal structure that keeps things simple: communicate what was changed and why, and confirm that it does what it should do. For big changes this can take hours, but I fear that without it, if the developers who did the work went on vacation and the code had to be changed ASAP, it would in some cases take even longer to understand what is going on.

JB King
A: 

In my experience this is a challenging thing to implement. There is a lot of resistance, though not overtly: most people, even managers, agree it is the best thing ever, but when push comes to shove no one wants to make time for it.

So, while you are figuring out the process, just let others know you are happy to review their code and ask others to review code you wrote.

Keep some statistics on the time spent looking at code and issues found. Don't go overboard - just keep enough to show whether it was useful time or not.
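
Purely as an illustration (not from the answer above) of how little record-keeping that needs, one CSV row per review session is enough to total minutes spent against issues found; the helper name and columns are made up:

    # Hypothetical helper: append one row per review session.
    import csv
    from datetime import date

    def log_review(log_path, reviewer, minutes_spent, issues_found):
        with open(log_path, "a", newline="") as f:
            csv.writer(f).writerow(
                [date.today().isoformat(), reviewer, minutes_spent, issues_found])

    # e.g. log_review("review_log.csv", "alice", 45, 3)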

The best book I have read about this topic is not that freebie one (which is basically a sales pitch, although a good start) but this one:

http://www.amazon.com/Handbook-Walkthroughs-Inspections-Technical-Reviews/dp/0932633196

Don't get caught up in the process - this is about improving the work products.

Tim
A: 

Back when I was a team lead at another job, I had projects with 2 to 6 people on them that ran for about 2 to 6 months. Here is what I did.

  1. Have the entire team review all project source code changes every 1 to 2 weeks.
  2. Label the project each time a review is done (in Subversion you might simply note the project revision).
  3. Print out the diffs on existing files from the previous checkpoint to the new checkpoint, and of course new files in their entirety (see the sketch after this list). Give each person on the team a copy, and at least a day to scan the listings, should they be so inclined.
  4. Schedule one or two short meetings in a conference room, and discuss why the changes were made and what they get done. Discuss any needed corrections, and steal good ideas from each other. Try not to go overboard on corrections, and favor "next time try this" over "Thou Shalt".
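
A minimal sketch of step 3, assuming Subversion and that the checkpoint revision numbers were noted as in step 2 - the repository URL and revisions are made up:

    # Illustrative only: unified diff between two recorded checkpoint revisions.
    import subprocess

    def checkpoint_diff(repo_url, prev_rev, new_rev):
        return subprocess.run(
            ["svn", "diff", "-r", f"{prev_rev}:{new_rev}", repo_url],
            capture_output=True, text=True, check=True).stdout

    # e.g. print(checkpoint_diff("http://svn.example.com/project/trunk", 1480, 1532))
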
Roboprog
A: 

There should be no more than 3 code reviewers. Include one person from UI, one from the middle tier, and one from the server team. If that's not applicable, then assemble a diverse few developers.

Set clear goals. Avoid getting sidetracked. Most importantly, declare up front what is out of bounds. We avoid discussing style-related issues and comments, unless there is a total lack of them. You don't want to start a debate on brackets and tabs.

Slidell4life
A: 

Where I'm currently working, we split reviews into two types:

"Warm" reviews happen just pre-commit, per check-in. One or sometimes two other people (usually "stakeholders", but sometimes entirely unrelated coders -- to help spread expertise and get fresh eyes) come over and sit with the author. The author then walks through the diffs, commenting on changes as necessary.

"Cold" reviews are a bit more like formal inspection; a coder who is not guided by the original author reviews the change -- usually post-commit. Typically an e-mail with a list of comments is the result (although we're experimenting with ReviewBoard).

Warm reviews are mandatory for all checkins. When we hit alpha/beta, we typically start doubling up on the warm reviews -- two others will attend rather than just one.

Warm reviews are mostly for catching your own mistakes as an author, for spreading understanding around, and for catching the occasional typo. They also help junior people come up to speed with our coding standards and best practices.

Junior programmers are not allowed to be reviewed by other juniors, due to some issues with that in the past.

Cold reviews are usually reserved for larger checkins or groups of checkins. We use these for offsite coders, for initial system commits (i.e. "proof of design" or "skeleton" checkins), and for large, risky, or intrusive commits.

Cold reviews are generally best for finding deeper structural issues or deep logic or error handling bugs.

We also try to get people to write up really fast "mini design docs" before launching into any given coding task. (The idea is you'd write one to three of these a week.) Format is up to you -- you can use UML, a bulleted task list, etc. (We even sometimes get photos of a whiteboard that has been used in a meeting about a feature.)

The goal is to keep MDDs short and sweet, in most cases. More formal design docs can be set up as needed, for larger systems. (Since we're in game development, we tend toward "remaining agile", which necessarily obsoletes most large docs pretty quickly.)

MDD reviews are usually quick once-overs on the doc by all interested parties -- designers, artists, and other coders.

leander
A: 

I've previously used a 'pair programming' approach. Not in the XP sense, but in the sense that every task has a coder and a reviewer nominated before the coding starts. The coder and reviewer work together identifying the correct solution so the reviewer knows what is meant to be done to perform the task.

All work is done in a branch by a single coder (or in trunk, with nothing checked in until the work has passed review) and a diff is generated for the whole branch. If using something brain dead like SVN, a checkout per task is needed so changes for each task are isolated. If using something less brain dead that's not strictly necessary. The branch method works best because then the reviewer can generate their own diffs off the branch and there's no need to email them around (and you can check in work in progress without destroying the main line).
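
As a rough sketch of the branch method (the repository URL and branch layout are hypothetical, not from the answer), the reviewer can pull the whole-branch diff against trunk themselves:

    # Illustrative only: whole-branch diff against trunk, generated by the reviewer.
    import subprocess

    REPO = "http://svn.example.com/project"  # hypothetical repository root

    def branch_review_diff(branch):
        return subprocess.run(
            ["svn", "diff", f"{REPO}/trunk", f"{REPO}/branches/{branch}"],
            capture_output=True, text=True, check=True).stdout

    # e.g. print(branch_review_diff("task-1234"))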

Add a required state called 'review' to your workflow in the issue tracker (and one called 'integration'). When the reviewer has finished looking at the code, they add their comments to the issue tracker and forward the ticket on: if the work passed review, set the ticket to the 'integration' state; if it didn't, put it back to 'in progress' so the developer can rework it.

That process has worked reasonably well in all the places I've seen it implemented.

Adam Hawes