views: 599
answers: 7

I want to better understand the patterns that work and the patterns that don't work for successful code reviews.

  1. Should there be a difference between a Formal Code Review and an Informal Code Review?

  2. What level of documentation do people produce for a Code Review?

  3. Can a Code Review be just as useful without the author of the code participating?

Thanks in advance for your input.

+1  A: 

Start by having a look at the concept of Fagan inspections.

This method really improves the quality of the reviewed code, and of the review process itself!

And always remember to criticise the code and not the coder.

Rob Wells
Can you add a link to the Fagan concept?
lillq
+1  A: 

Using the Eclipse Jupiter plugin and following the process it automates has worked very well for us. It's not too invasive or bureaucratic, but it still really helps with finding bugs and design problems.

Nicholas Trandem
+6  A: 

The best code reviews I have ever done involve a tool called Code Collaborator from Smartbear. The author uploads the code to a server, and reviewers and observers can see the differences between the new code and the old code and comment on the new code. Here are the best practices:

  1. The author comments on the code to give the reviewer a clue about where to start.
  2. The reviewer comments and adds defects that must be fixed before the review is accepted. Only important things should be marked as defects, not typos or things you know the developer will fix before committing.
  3. Involve new developers as observers. This is good training to get them familiar with the project.
  4. Involve "experts" on that piece of code as observers or reviewers.

Anti-patterns:

  1. Using the code review tool as a design review tool. It is not suitable for that; only review code for which a design has already been agreed upon.
  2. Involving too many reviewers and observers.
  3. Letting the review linger in the system for more than a couple of days.
David Segonds
I'm a big fan of this tool as well. Commenting inline on the code in an asynchronous manner catches the vast majority of issues. That makes developers much more willing to do desk checks and in-person walk-throughs for the bits that weren't obvious.
christopher_f
Thanks for the kind words guys. If you enjoy discussing code review strategies and best practices, or if you want tips for using Code Collaborator, check out our blog: http://blog.smartbear.com
Jason Cohen
+4  A: 

The pattern I find critical to good code reviews is having reviewers with the time and motivation to really read and understand the code. Reviewers who only give the code a cursory look and point out a few style changes don't really contribute to the process.

AShelly
+1  A: 
  • Schedule reviews or inspections regularly and frequently. Review/inspect whole modules when they are new, and the delta plus surrounding code for maintenance changes. Building up a backlog of material to be reviewed makes the whole process intimidating.

  • Everyone's code gets reviewed, from the most junior to the most senior, and anyone can comment without fear of recrimination.

  • Don't just review/inspect code; look at tests (especially for TDD) and test results too. We've detected some interesting bugs and test omissions by discovering that what we thought we were testing wasn't really what was being tested, or that we'd inadvertently used test data from the same equivalence class when we thought we were using distinct classes of data (a small test sketch illustrating this follows below).
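
As an illustration of the equivalence-class point above, here is a minimal JUnit 5 sketch; AgeClassifier and its classify method are hypothetical names used only for this example.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Hypothetical class under test: classify(age) returns "minor" (<18),
    // "adult" (18-64), or "senior" (65+).
    class AgeClassifierTest {

        // Weak coverage: both inputs come from the same equivalence class
        // (18-64 -> "adult"), so the second assertion adds no new information.
        @Test
        void sameEquivalenceClassTwice() {
            assertEquals("adult", AgeClassifier.classify(30));
            assertEquals("adult", AgeClassifier.classify(45));
        }

        // Better coverage: one representative per class plus the boundaries
        // between classes, where off-by-one bugs tend to hide.
        @Test
        void oneRepresentativePerClassPlusBoundaries() {
            assertEquals("minor", AgeClassifier.classify(17));
            assertEquals("adult", AgeClassifier.classify(18));
            assertEquals("adult", AgeClassifier.classify(64));
            assertEquals("senior", AgeClassifier.classify(65));
        }
    }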

CJP
+2  A: 

I don't think the distinction is formal versus informal so much as it is how much time is allowed for the review. If there is a bug to be fixed ASAP and the change is just a few lines of code, it may still get reviewed, but likely not in the same way as a new ERP or CRM system. In some cases hours can be set aside to review code; in other cases the review should take less than five minutes.

The documentation can take a few forms. It may just be an e-mail saying that Joe reviewed Bob's work and that it is approved to go to the next stage for a release. Alternatively, it may be a few pages of notes on what to change before promoting the code, so that appropriate design patterns are used and the code is cleaned up from its initial quick-and-dirty form.

As for the last question, I think the answer is sometimes. If you want other opinions on how to optimize some code, not having the author participate may help generate ideas; the author can then either make the change or justify why the change doesn't improve the code. For example, a hand-rolled bubble sort may be rejected in favor of more efficient sorting algorithms like quicksort, mergesort, and heapsort.
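
To make that sorting example concrete, here is a minimal Java sketch of the kind of change a reviewer might request; the class and method names are invented for illustration only.

    import java.util.Arrays;

    class SortReviewExample {

        // As submitted: a hand-rolled O(n^2) bubble sort.
        static void bubbleSort(int[] values) {
            for (int i = 0; i < values.length - 1; i++) {
                for (int j = 0; j < values.length - 1 - i; j++) {
                    if (values[j] > values[j + 1]) {
                        int tmp = values[j];
                        values[j] = values[j + 1];
                        values[j + 1] = tmp;
                    }
                }
            }
        }

        // Likely reviewer suggestion: use the standard library's
        // O(n log n) sort instead of reimplementing one by hand.
        static void sortWithLibrary(int[] values) {
            Arrays.sort(values);
        }
    }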

Edited to add a couple of patterns I found useful. For bug-fix code reviews, the pattern I like best is the following: 1) show me the bug in a testing/staging environment so I can see what went wrong; 2) show me where the code was changed, with a brief explanation of why it was changed this way; 3) show me that the code is fixed on your local machine.

In contrast, if one implements an API with a dozen methods, it may be better to have some documentation that summarizes the implementation and why certain choices were made, e.g. what kind of list implementation was used: an array, hashtable, linked list, generic list, stack, queue, etc.

JB King
+2  A: 

My company, Smart Bear, makes Code Collaborator (as mentioned by David Segonds). We also give away a free book called Best Kept Secrets of Peer Code Review. It has some good best practices that we learned from working with our customers and from the rest of the literature.

Here are some points:

  • Keep the code review small. Too much code at once is overwhelming, so if your reviews have too much code in them, review more often in smaller chunks.
  • Involve everyone, but not all at once. Code review is a chance for everyone to learn, reviewers and authors alike.
  • Remember, it's about the code, not about the author.
Brandon DuRette