I am putting together some guidelines for code reviews. We do not have a formal process yet and are trying to formalize one, and our team is geographically distributed.

We are using TFS for source control (we used it for tasks/bug tracking/project management as well, but have migrated that to JIRA) with VS2008 for development.

What are the things you look for when doing a code review?

  • These are the things I came up with:
    1. Enforce FxCop rules (we are a Microsoft shop)
    2. Check for performance (any tools?) and security (thinking about using OWASP Code Crawler) and thread safety
    3. Adhere to naming conventions
    4. The code should cover edge cases and boundary conditions
    5. Handle exceptions correctly (do not swallow exceptions; see the sketch after this list)
    6. Check if the functionality is duplicated elsewhere
    7. Method bodies should be small (20-30 lines), and methods should do one thing and one thing only (no side effects; avoid temporal coupling)
    8. Do not pass or return nulls from methods
    9. Avoid dead code
    10. Document public and protected methods/properties/variables
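
To make items 5 and 8 concrete, here is a rough C# sketch of the kind of thing I would expect a reviewer to flag, and the preferred alternative (the types and names are made up purely for illustration):

```csharp
using System;
using System.Collections.Generic;

public class OrderService
{
    // Flagged in review (item 5): the exception is swallowed, so the caller
    // never learns that the lookup failed.
    public IList<string> GetOrderIdsSwallowing(Dictionary<int, IList<string>> ordersByCustomer, int customerId)
    {
        try
        {
            return ordersByCustomer[customerId];
        }
        catch (Exception)
        {
            return null; // also violates item 8: returning null
        }
    }

    // Preferred: validate the input explicitly and return an empty list
    // instead of null, so callers do not need special-case handling.
    public IList<string> GetOrderIds(Dictionary<int, IList<string>> ordersByCustomer, int customerId)
    {
        if (ordersByCustomer == null)
        {
            throw new ArgumentNullException("ordersByCustomer");
        }

        IList<string> orders;
        if (ordersByCustomer.TryGetValue(customerId, out orders))
        {
            return orders;
        }
        return new List<string>();
    }
}
```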

What other things do you generally look for?

I am trying to see if we can quantify the review process (so that it would produce identical output when the same code is reviewed by different people). For example, saying "the method body should be no longer than 20-30 lines of code" as opposed to saying "the method body should be small".

Or is code review very subjective (and would it differ from one reviewer to another)?

The objective is to have a marking system (say, -1 point for each FxCop rule violation, -2 points for not following naming conventions, +2 points for refactoring, etc.) so that developers would be more careful when they check in their code. This way, we can identify developers who consistently write good or bad code. The goal is to have the reviewer spend about 30 minutes at most on a review. (I know this is subjective, considering that the changeset/revision might include multiple files or huge changes to the existing architecture, but you get the general idea: the reviewer should not spend days reviewing someone's code.)
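(To illustrate the weights above: a changeset with three FxCop violations and one naming-convention violation that also includes a genuine refactoring would score 3 × (-1) + (-2) + 2 = -3.)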

What other objective/quantifiable system do you follow to identify good/bad code written by developers?

Book reference: Clean Code: A Handbook of Agile Software Craftsmanship by Robert Martin

+19  A: 

Most of the points you described are only a matter of code formatting or "surface" stuff:

  • adhere to naming conventions
  • avoid dead code
  • document
  • ...

All of this could be checked with an automated tool: there is no need to have an experienced developer spend time going through the code to watch for that.

I don't know much about .NET, but for PHP we have tools to check that kind of stuff; considering that .NET is often said to be "more industrial" than PHP, I would be surprised if no such tool existed for it.


The automated tool can do both of the following:

  • Be integrated into an automated build process that runs every night
  • Send e-mail reports of:
    • warnings (for instance, a method is longer than 20 lines)
    • errors (for instance, a method is longer than 50 lines)

The mail can be sent either to the whole team or to the person who committed the code that fails a check -- or you could use a reporting web interface (the same note about .NET and PHP applies).
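
As a very rough sketch of the kind of check such a tool performs (the thresholds, file layout, and brace-counting heuristic here are assumptions; a real tool such as StyleCop, FxCop, or NDepend does this properly), something like this could run during the nightly build:

```csharp
using System;
using System.IO;
using System.Text.RegularExpressions;

// Crude nightly check: flags methods longer than 20 lines as warnings and
// longer than 50 lines as errors. Only a sketch -- it assumes one method
// signature per line and Allman-style braces.
public static class MethodLengthCheck
{
    private static readonly Regex MethodStart =
        new Regex(@"^\s*(public|protected|internal|private)[^;=]*\)\s*$");

    public static void Main(string[] args)
    {
        string root = args.Length > 0 ? args[0] : ".";
        foreach (string file in Directory.GetFiles(root, "*.cs", SearchOption.AllDirectories))
        {
            string[] lines = File.ReadAllLines(file);
            for (int i = 0; i < lines.Length; i++)
            {
                if (!MethodStart.IsMatch(lines[i]))
                {
                    continue;
                }

                // Count lines until the braces opened by this method are closed again.
                int depth = 0;
                int length = 0;
                int j = i;
                do
                {
                    foreach (char c in lines[j])
                    {
                        if (c == '{') depth++;
                        if (c == '}') depth--;
                    }
                    length++;
                    j++;
                } while (j < lines.Length && (depth > 0 || length == 1));

                if (length > 50)
                {
                    Console.WriteLine("ERROR   {0}({1}): method body is {2} lines", file, i + 1, length);
                }
                else if (length > 20)
                {
                    Console.WriteLine("WARNING {0}({1}): method body is {2} lines", file, i + 1, length);
                }
            }
        }
    }
}
```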


I would also add that *automated testing* can help a lot in detecting a number of errors before the code is used in production.

And automated tests can also help with some metrics, I suppose.


Code reviews done by experienced developers also have another great advantage that you didn't talk about:

  • An experienced developer can often detect a wide variety of bugs just by looking at the source code (I quite often find bugs when I do code reviews)
  • A code review done by an experienced developer will allow him to make comments and recommendations to the team:
    • he will try to understand the algorithms used in the code, and possibly suggest better solutions
    • just by reading the code, there is often stuff you can see that an automated tool will not detect.

But for a code review that goes deeper than just code formatting, you'll need more than half an hour...

Pascal MARTIN
This tool for .NET (well, C# only right now) is StyleCop: http://code.msdn.microsoft.com/sourceanalysis
Bryan Anderson
A: 

It depends.

Some parts of the review are easily quantifiable (no FXCop problems, no StyleCop errors, no CAT.NET errors etc.)

Style, however, can be subjective -- but as you say, once you start being more specific (no method > 20 lines) you can measure it, and tools like NDepend can do that automatically. Some things will never be automatic, though: checking edge-case handling would require tests, which brings up code coverage, and 100% coverage is an unreachable ideal in a lot of cases. Duplication checking is hard to do automatically. As for null checks, I'm not sure I agree with you, but you may be able to write NDepend or FxCop rules for that one.

The more tools the better, and if the tools allow developers to check their work before committing changes, and checks are performed as part of the CI process, then you're going to minimise the need for reviews.

blowdart
A: 

Cyclomatic complexity is one way to evaluate code that's 'not bad'.

In actual code that has high CC, I have a high "what's going on here, I don't remember" factor. Lower CC code is easier to figure out.
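
For a quick, hypothetical illustration of how the number is obtained (CC is roughly one plus the number of decision points in a method):

```csharp
public static class Example
{
    // Decision points: the for-loop condition and the two ifs (+3),
    // plus the single base path (+1), give a cyclomatic complexity of 4.
    public static int CountPositiveEvens(int[] values)
    {
        int count = 0;
        for (int i = 0; i < values.Length; i++)
        {
            if (values[i] <= 0)
            {
                continue;
            }
            if (values[i] % 2 == 0)
            {
                count++;
            }
        }
        return count;
    }
}
```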

Obviously the Usual Caveats apply for metrics.

Paul Nathan
That's good, but not working.
AjmeraInfo
@AjmeraInfo: huh?
Paul Nathan
A: 

Code reviews are both subjective and objective. Rules like "the method body must be 20-30 lines" are subjective (some people might think 100 lines is OK) but if your company has decided that 20-30 lines is the limit, that's fine and you can measure that. I think that the point system you've come up with is a great idea. You'll need to re-evaluate it periodically as you find that certain rules need to have more or less weight in the scoring but as long as everyone knows the rules, it looks like a good system.

Other things I would look for:

  • clarity of code - sometimes a piece of code can be written in one line or in several lines. The average programmer shouldn't have to spend several minutes trying to figure out what a line of code does; if he does, maybe the code should be rewritten in a simpler way. This is subjective, but the key point is that the code should be immediately understandable by most programmers in your company.
  • check function input parameters - there should be some code to check that input parameters fall within acceptable ranges, and this should match the function documentation (a small sketch follows this list).
  • descriptive variable names - except in a few special cases (loop indices, etc.), variable names should be descriptive. One resource you might want to look at for naming conventions and more is Code Complete.
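
A small sketch of the second point, with the check matching the documented contract (the class and the numbers are made up for illustration):

```csharp
using System;

public class TemperatureConverter
{
    /// <summary>
    /// Converts a Celsius temperature to Fahrenheit.
    /// </summary>
    /// <param name="celsius">Must be at or above absolute zero (-273.15).</param>
    /// <exception cref="ArgumentOutOfRangeException">
    /// Thrown when <paramref name="celsius"/> is below absolute zero,
    /// matching the documented precondition above.
    /// </exception>
    public double ToFahrenheit(double celsius)
    {
        if (celsius < -273.15)
        {
            throw new ArgumentOutOfRangeException("celsius", "Below absolute zero.");
        }
        return celsius * 9.0 / 5.0 + 32.0;
    }
}
```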
TLiebe
+2  A: 

As a rule of thumb, avoid spending any time in a code review doing something that could be done by machine. For example, your first item is to "enforce FXCop rules" but presumably that can be done by FXCop without humans having to also do it.
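
As a sketch of what "letting FxCop do it" can look like in practice (the install path here is an assumption; /file, /out, and /console are standard FxCopCmd switches), a build or pre-check-in step could shell out to FxCopCmd instead of asking a human to apply those rules:

```csharp
using System.Diagnostics;

// Hypothetical pre-review gate: run FxCopCmd automatically so the reviewer
// never has to check FxCop rules by hand. The install path is an assumption.
public static class FxCopGate
{
    public static bool AnalysisCompleted(string assemblyPath, string reportPath)
    {
        ProcessStartInfo info = new ProcessStartInfo();
        info.FileName = @"C:\Program Files\Microsoft FxCop 1.36\FxCopCmd.exe";
        info.Arguments = string.Format("/file:\"{0}\" /out:\"{1}\" /console", assemblyPath, reportPath);
        info.UseShellExecute = false;

        using (Process fxCop = Process.Start(info))
        {
            fxCop.WaitForExit();
            // Zero means the analysis ran to completion; the XML report at
            // reportPath lists any rule violations for the build to publish.
            return fxCop.ExitCode == 0;
        }
    }
}
```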

Greg Hewgill
+1  A: 

It seems like you are getting too detailed too fast. You should break it down more: review the code both for its code quality and for its feature compliance. You should keep the two separate, and that isn't even the end of the story... so here's my suggestion:

Code quality:

  • Automated checks:
    • Conformance to style: is the naming convention followed, is all the code properly indented, etc.
    • Efficiency standards: check for memory leaks, excessive complexity, redundant variables, etc.
  • Actual Peer Review:
    • A simple walk-through of the design
    • An explanation of deviations from the automated checks
    • Ease of maintenance: talk about how the code can be maintained, and so on
    • Testability: how easy is it to test this code? Got a plan? (See the small test sketch at the end of this answer.)

Feature compliance:

  1. A review of the feature requirements and any changes since the requirements and/or design review
  2. Demo the functionality associated with the requirements and check them off one by one
  3. Discuss any additional requirements in the other aspects of the software encountered during implementation (like deployment plans, infrastructure, etc.)
  4. Explanation of any deviations from the requirements, if there are any by that point.

If you can cover yourself on these two aspects of a code review, you're golden.
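
On the testability bullet above, a minimal, made-up example of the kind of isolated test I'd want to see be possible for the code under review (MSTest syntax, since this is a Microsoft shop):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// If logic can be exercised in a small, isolated test like this, it is
// usually also easy to review. Class and method names are made up.
[TestClass]
public class DiscountCalculatorTests
{
    [TestMethod]
    public void OrdersOfOneHundredOrMoreGetTenPercentOff()
    {
        DiscountCalculator calculator = new DiscountCalculator();
        Assert.AreEqual(90.0m, calculator.ApplyDiscount(100.0m));
    }
}

public class DiscountCalculator
{
    public decimal ApplyDiscount(decimal orderTotal)
    {
        return orderTotal >= 100m ? orderTotal * 0.9m : orderTotal;
    }
}
```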

jghost
+12  A: 

Many good answers already here. I have a general addition though:

My experience with code review is that it should be a combined effort to improve the code, not a 'measure' to decide who is good or bad at their job. When it doesn't matter whether you get a lot of remarks during your code review, reviewers can be stricter and give more suggestions to improve the code.

To improve the quality of checked-in code, enforce that review comments are processed (let the reviewer approve how each comment was handled), and also use static code-checking tools to enforce a baseline level of quality for the initial commit.

Thirler
+1 for your comment about not letting this become a comparison of who is better at their job. This would be bad for morale!
KarstenF
@KarstenF: True. Also, DeveloperA might be working on a more complex task (more lines of code) whereas DeveloperB might be working on a simpler task and might score less (on the points scale). It would be unfair to say that DevA did a bad job when there is no way to normalize their jobs/tasks.
ram
Also some developers might try to discredit their colleagues.
Thirler
@Thirler, this point is right on. Petty concepts (like grading) lead to pettiness.
Yar
+1 on this very important point. As soon as your process starts producing a number, people are going to game their code to boost their numbers. They write a lot of lines of simple code, for instance, so that their penalties-per-method rating is very low. Or they spend all of their time finding perfect variable names. And then it becomes a political thing, because no one is going to want to point out minor mistakes in their friend's code, because that will LOWER THEIR SCORE and MAKE THEM LOOK BAD! Oh noes! In short, your heart is in the right place, but this is a bad idea. Programmers are not show dogs.
leo grrr
A: 

A marking system sounds tricky to get right, but worthwhile to have as a measurement tool: you cannot improve what you cannot measure. But you should probably accept that some things will be difficult/impossible to quantify accurately. The tricky thing will be working out how many points each quality should score: for example, if adhering to naming conventions scores 2 points, then how many points for keeping methods small?

Maybe something like a simple checklist would be better, so that code can be marked as conforming, partially conforming or not conforming to a particular quality. Later, you can add scoring to the checklist once you see what quality issues come up the most often, or cause the most problems.

The review process should also be flexible enough to allow code to fail parts of the review, provided that this can be justified and documented. Blindly sticking to some code quality standard which causes a component to become unnecessarily complex/unmanageable is a bad idea!

KarstenF
prefect things is happen.
AjmeraInfo
+3  A: 

My only advice would be to avoid making your code review process too strict - the most important thing is that the code review actually happens and that it is taken seriously.

The more exhausting the process is for the reviewer, the less likely it is that code reviews will happen and be taken seriously rather than just being seen as an annoyance. Besides, the real value of code reviews lies in the reviewer's ability to use their own judgement; automated tools can be used to check things like whether FxCop rules pass.

Kragen
+100! I mean, +1, but really, isn't this the whole point: for code reviews and unit tests (and other stuff), less is more. This is only true because more is more only until it becomes zero :)
Yar
+14  A: 

Grading individuals in a review is counter to most successful systems I've worked with, maybe all. But the goal I've been trying to reach for 20+ years is fewer bugs and increased productivity per-engineer-hour. If grading individuals is a goal, I suppose reviews could be used. I've never seen a situation where it was required, as a worker or as a leader.

Some objective studies (Fagan, etc.) and a lot of popular wisdom suggest that peer relationships facilitate code reviews aimed at reducing bugs and increasing productivity. Working managers may participate as workers, but not as managers. Points of discussion are noted; changes to satisfy reviewers are generally a good thing but not required. Hence the peer relationship.

Any automated tools that can be accepted without further analysis or judgment are good - lint in C, C++, Java. Regular compilation. Compilers are REALLY good at finding compiler bugs. Documenting deviations in automated checks sounds like a subtle indictment of the automated checks. Code directives (like Java has) that allow deviations are pretty dangerous, IMHO. They are great for debugging, to let you get to the heart of the matter quickly. They are not so good to find in a poorly documented, 50,000 non-comment-line block of code you've become responsible for.

Some rules are stupid but easy to enforce; defaults for every switch statement, even when they're unreachable, for example. Then it's just a check box, and you don't have to spend time and money testing with values that don't match anything. If you have rules, you'll have foolishness; they are inextricably linked. Any rule's benefit should be worth the foolishness it costs, and that relationship should be checked at regular intervals.
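
A tiny, made-up illustration of the switch-default rule:

```csharp
using System;

public static class SeverityFormatter
{
    public enum Severity { Warning, Error }

    public static string Describe(Severity severity)
    {
        switch (severity)
        {
            case Severity.Warning:
                return "warning";
            case Severity.Error:
                return "error";
            default:
                // "Unreachable" today, but it earns its keep the day
                // someone adds a new Severity value.
                throw new ArgumentOutOfRangeException("severity");
        }
    }
}
```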

On the other hand, "It runs" is no virtue before review, nor a defense in review. If development followed the waterfall model, you'd like to do the review when coding is 85% complete, before complicated errors are found and worked out, because review is a cheaper way to find them. Since real life isn't the waterfall model, when to review is somewhat of an art and amounts to a social norm. People who will actually read your code and look for problems in it are solid gold. Management that supports this in an ongoing way is a pearl above price. Reviews should be like check-ins - early and often.

I've found these things beneficial:

1) No style wars. Where open curly braces go should only be subject to a consistency check in a given file. All the same. That's fine then. Ditto indentation depths and tab widths. Most organizations discover they need a common standard for tabs, which are used as large spaces.

2) Ragged
       looking
   text that doesn't
       line up is hard to read
   for content.

BTW, K&R indented 5 (FIVE) spaces, so appeals to authority are worthless. Just be consistent.

3) A line-numbered, unchanging, publicly available copy of the file to be reviewed should be pointed to for 72 hours or more before the review.

4) No design-on-the-fly. If there's a problem, or an issue, note its location, keep moving.

5) Testing that goes through all paths in the development environment is a very, very, very, good idea. Testing that requires massive external data, hardware resources, use of the customer's site, etc, etc. is testing that costs a fortune and won't be thorough.

6) A non-ASCII file format is acceptable if creation, display, editing, etc. tools exist or are created early in development. This is a personal bias of mine, but in a world where the dominant OS can't get out of its own way with less than 1 gigabyte of RAM, I can't understand why files less than, say, 10 megabytes should be anything other than ASCII or some other commercially supported format. There are standards for graphics, sound, movies, and executables, and tools that go with them. There is no excuse for a file containing a binary representation of some number of objects.

For maintenance, refactoring or development of released code, one group of co-workers I had used review by one other person, sitting at a display and looking at a diff of old and new, as a gateway to branch check-in. I liked it, it was cheap, fast, relatively easy to do. Walk-throughs for people who haven't read the code in advance can be educational for all but seldom improve the developer's code.

If you're geographically distributed, looking at diffs on a screen while talking with someone else looking at the same thing would be relatively easy. That covers two people looking at changes. For a larger group who have read the code in question, multiple sites isn't a lot harder than all in one room. Multiple rooms linked by shared computer screens and squawk boxes work very well, IMHO. The more sites, the more meeting management is needed. A manager acting as facilitator can earn their keep here. Remember to keep polling the sites you're not at.

At one point, the same organization had automated unit testing which was used as regression testing. That was really nice. Of course, we then changed platforms and the automated tests got left behind. Review is better; as the Agile Manifesto notes, relationships are more important than processes or tools. But once you've got review, automated unit tests/regression tests are the next most important help in creating good software.

If you can base the tests on requirements, well, like the lady says in "When Harry Met Sally", I'll have what she's having!

All reviews need to have a parking lot to capture requirements and design issues at the level above coding. Once something is recognized as belonging in the parking lot, discussion should stop in the review.

Sometimes I think code review should be like schematic reviews in hardware design- completely public, thorough, tutorial, the end of a process, a gateway after which it gets built and tested. But schematic reviews are heavyweight because changing physical objects is expensive. Architecture, interface and documentation reviews for software should probably be heavyweight. Code is more fluid. Code review should be lighter weight.

In a lot of ways, I think technology is as much about culture and expectation as it is about a specific tool. Think of all the "Swiss Family Robinson"/Flintstones/McGyver improvisations that delight the heart and challenge the mind. We want our stuff to work. There isn't a single path to that, any more than there was "intelligence" which could somehow be abstracted and automated by 1960s AI programs.

Bill IV
That's a good answer, particularly with regard to grading people - this should not be the point of a code review.
Paddy
A: 

If you want people's code to be more standardized without making them "waste their time on formatting" (as some devs will complain), invest in a tool like ReSharper, as it makes fixing formatting and other refactoring tasks an almost automated process.

Chad
A: 
  • If a machine can check it, people shouldn't.
  • Only one checklist item: Is every error case handled correctly everywhere?
  • Use Code Review to improve quality and transfer knowledge
  • Don't use Code Review to identify "bad" developers
  • Ego dings are more effective than explicit points
  • Keep it short - 90 minutes and 500 lines is HUGE
wowest
+3  A: 

I think your grading system is a bad idea. What is the point? To identify good and bad programmers? Everyone in the code review can form a better assessment of a particular programmer from the code presented than from some arbitrary assignment of values to a somewhat arbitrary set of characteristics. If you want to identify good and bad programmers... ask the programmers. I guarantee humans can do this assessment better than your silly heuristic.

My suggestion would be to try to improve code reviews so that people share ideas and opinions openly in a non-judgmental and non-hostile environment. If you can do that, you'll be 100x better off than casting judgment on programmers based on silly checklists that purport to do a good job of assessing them. I think a lot of programmers are already proud, and hard on themselves if they do poorly in code reviews; I question whether further 'punishment' for poor performance is generally helpful.

robarson
A: 

I actually care more about the "subjective" stuff than anything else, frankly. What I want from a good code review is someone to check my logic, not my typing. And that's what I focus on when I give a code review.

The general format I like to take is:

  1. What are we fixing?
  2. What was causing it? (look at the code)
  3. How are we fixing it?
  4. Show me the new code
  5. Show me the code working

Without that, just looking at diffs tends to give input on minor issues or stylistic points. I'm far more concerned with whether the logic is correct, the approach used overall is good, and whether the solution will be maintainable.

As an example, I recently looked at some code by a co-worker. The original issue was an FxCop violation, but what was being done was attempting to determine the presence or absence of a Windows feature by checking the version number. My main input was that this was a fragile way to do so and that we'd be better off querying for the service directly, as the mapping between feature existence and Windows SKU could change in the future; the version check was not future-proof at all.
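
As a rough sketch of the difference between the two approaches (the service name here is hypothetical; the real case involved a specific Windows feature):

```csharp
using System;
using System.ServiceProcess; // requires a reference to System.ServiceProcess.dll

public static class FeatureDetection
{
    // Fragile: infers that the feature exists from the OS version number.
    public static bool AvailableByVersionCheck()
    {
        return Environment.OSVersion.Version.Major >= 6; // breaks if the mapping ever changes
    }

    // Sturdier: ask the system directly whether the service backing the
    // feature is installed. "SomeFeatureService" is a made-up name.
    public static bool AvailableByQuery()
    {
        foreach (ServiceController service in ServiceController.GetServices())
        {
            if (service.ServiceName == "SomeFeatureService")
            {
                return true;
            }
        }
        return false;
    }
}
```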

kyoryu
Not clear from your answer: did FxCop catch that fragility, or did you?
Yar
+2  A: 

If you can measure it, if it's objective, quantifiable, then try to have a tool do it. Where you need an experienced reviewer is for the fuzzy subjective stuff.

kiwicptn
100 hours to make the tool, 1000 saved using it.
Yar
+1  A: 

A lot of good comments have already been made on issues of style, which is important. On a team project, it is valuable for all the code to look like it was written by a single author. This makes it easier for other members of the team to drop in and fix problems when they occur. What quantitative measures you choose to ensure this broader goal are less important.

One additional item is ensuring that the code matches up with the overall architecture that has been agreed upon for the rest of the system. Similar problems should all be solved the same way. If the application logic has been split across layers, does the code being reviewed break apart its functionality the way that the rest of the system does? Alternatively, does the code under review teach something new that should be pulled back across the rest of the system? Just as the style checks ensure the code all looks the same, the architecture review should ensure that the code all works the same way. The emphasis here, again, is on maintainability: anyone on the team should be able to drop into this code and have a grasp of what is happening right away.
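
As a made-up illustration of the layering point: if the rest of the system funnels data access through a repository layer, a reviewer should flag new UI-level code that opens its own database connection instead of following that pattern. The consistent version looks something like this (all names are hypothetical):

```csharp
// Consistent with a (hypothetical) agreed architecture: the controller talks
// to a repository abstraction and never touches SQL or connections itself.
public class CustomerController
{
    private readonly ICustomerRepository repository;

    public CustomerController(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    public Customer Show(int id)
    {
        return repository.GetById(id);
    }
}

public interface ICustomerRepository
{
    Customer GetById(int id);
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```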

The grading idea seems like a disaster in the making, but you know your team best. It's possible that they will be motivated by such a system, but I think it more likely that people will start to worry more about their grade than solving problems. One of the really valuable side effects of code reviews is the mentoring opportunities they offer. The reviewer should treat the person who wrote the code as someone they are mentoring. Each issue found is not a problem, but an opportunity to create a more knowledgeable and sophisticated team member and a more tightly knit team overall.

Jim Blanchard
+1  A: 

I could write a few paragraphs, but I would only gloss over what Karl Wiegers explains in "Peer Reviews in Software: A Practical Guide". I think that his book contains explicit and concise answers to your question (and much more).

CesarGon