views: 313

answers: 16

After reading this blog post, I'm thinking that it might be a good idea...occasionally.

The neurotic paranoid in me began to wonder how close an eye they were really keeping on my code, especially after a few times when I noticed bugs right before I checked in the supposedly reviewed code.

So I came up with the concept of Jester bugs. Jester is a unit test tester for Java. It works by making a syntactically correct change to the original source code under test, compiling it, and then running the unit test suite. If the test suite still reports that everything is A-okay, then you know the suite is flawed, because it missed the monkey wrench that was thrown into the code.
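Roughly, the loop Jester runs looks something like this (a rough Python sketch of the idea only, not Jester's actual Java implementation; the mutation list, file handling, and the "make test" command are placeholders):

    import subprocess

    # A handful of syntactically valid substitutions to try, one at a time.
    MUTATIONS = [("==", "!="), ("<", "<="), ("true", "false")]

    def tests_pass():
        """Run the project's test suite; True means every test passed."""
        return subprocess.run(["make", "test"], capture_output=True).returncode == 0

    def find_surviving_mutations(source_path):
        original = open(source_path).read()
        survivors = []
        for old, new in MUTATIONS:
            if old not in original:
                continue
            # Throw the monkey wrench in: apply one small change, then retest.
            with open(source_path, "w") as f:
                f.write(original.replace(old, new, 1))
            try:
                if tests_pass():
                    # Everything still "A-okay" -- the suite never noticed the change.
                    survivors.append((old, new))
            finally:
                with open(source_path, "w") as f:
                    f.write(original)  # always restore the untouched source
        return survivors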

Jester bugs are bugs that you intentionally introduce into your code to see if your code reviewer is really spotting them or just giving the code a pass.

-- Al Sweigart

+4  A: 

No. Emailing a few hundred or thousand lines of code to somebody and saying "Please review this" is never going to result in a thorough review. If you think people aren't reviewing the code when you do this, you're right. Intentionally putting bugs into the code is just turning this ridiculous practice into a game.

The only way to properly review code is to get people together and go through it together.

Kristopher Johnson
+3  A: 

Well, I know that if I ever did this, I would forget about it and thus end up shipping a deliberately introduced bug in the codebase. So I'm not sure this is a good idea unless you've got a good memory.

Konrad Rudolph
A: 

I wanna say "Are you serious?", but I can kinda see your point.

I probably would not do this, though; I'd rather actually wave the code in their face, ask for real answers, and get them to engage their brain. Maybe even plant one to get their attention there and then; maybe they will then feel more compelled to review it properly after check-in.

I wouldn't like to check in bugs on purpose, just through fear of losing the note saying where the bug is ^_^

Rob Cooper
+1  A: 

It's something I've done before when discussing rules for an online game: we'd add in silly things so that we'd know each of us was actually reading the suggestions. It was all good-humoured and very helpful.

Teifion
+2  A: 

Something about intentionally adding bugs to your code to catch your co-workers just doesn't sit right with me. I say you just do Formal Inspections. Then you know your code was reviewed, plus they have been shown to find more bugs.

Craig H
+3  A: 

I don't agree with the way the blogger is doing it. Code reviews should be done in person, in a room, with everyone there. But the idea of putting a bug in the code still stands, to see if people (1) read the code before they came to the meeting and (2) are paying attention and thinking during the meeting.

Thomas Owens
A: 

I remember seeing someone once suggest that you should not compile your code before a code review, and then try to find the compile errors during the review. The theory was that once the review was over, you could actually compile the code and see how many compile errors you didn't find in the review; it is then fair to assume you missed a similar number of logic errors as well. I don't know if I could ever force myself not to type "make", but it did make me think, and it would help with this problem as well.

Craig H
A: 

Can someone edit the Question so that it at least provides a summary of the article?

Outlaw Programmer
+2  A: 

This is a very passive-aggressive way of solving a problem which can probably be handled better directly. If you think you need better and more formal processes for code reviews, say so.

Wedge
A: 

I think the best way to deal with this issue would be to call the person out for being inattentive or not committed enough. Or simply stop doing code reviews with that person. Nobody likes a sneak; be direct and honest.

jonezy
+5  A: 

Of all the things you could spend your time doing to add value to your software development process, this has got to be very near the bottom of the list, if it would add value at all, which is extremely debatable.

I think the problem here is that you are misunderstanding the fundamental value add of a code review. The point of code reviews is not specifically to find bugs. You should have other processes in place such as unit testing, integration testing, continuous integration builds, QA tools/teams, etc to find bugs.

Code reviews provide value by identifying issues at different levels -- obviously you CAN identify specific bugs during code reviews, and that's great, but a code review can also help identify problems at a higher or lower level.

For example, at a slightly higher level someone may be able to point out that there is a requirement that all currency must be stored as cents in integers rather than as dollars in floating point and that the code under review fails to meet that requirement (while otherwise being perfectly correct).
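(A tiny, hypothetical illustration of why such a requirement exists; the amounts here are made up:)

    # Binary floating point cannot represent many decimal dollar amounts exactly.
    total_dollars = 0.10 + 0.20
    print(total_dollars)            # 0.30000000000000004, not 0.3
    print(total_dollars == 0.30)    # False

    # Integer cents add up exactly.
    total_cents = 10 + 20
    print(total_cents == 30)        # True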

At a lower level, it gives more experienced developers an opportunity to give the more junior developers advice on how they could have solved problems better or more simply. For example, perhaps the developer has a lot of nested for loops that could be replaced with some simple list comprehensions or higher-order functions.
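(A made-up before/after of the sort of simplification a reviewer might suggest:)

    # Before: nested for loops collecting matching pairs.
    pairs = []
    for x in range(10):
        for y in range(10):
            if x + y == 9:
                pairs.append((x, y))

    # After: the same result as a single list comprehension.
    pairs = [(x, y) for x in range(10) for y in range(10) if x + y == 9]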

Likewise, it might also give a younger, more energetic developer a chance to let an older, more experienced, but also more set-in-his-ways developer know about some new cutting-edge technology that would have simplified their code drastically.

At the end of the day it also serves to keep everyone up to date on what is going on in the code base, catch any egregious departures from what is expected, make sure everyone is on the right track and marching in the same direction, and help the developers involved grow and evolve.

John
A: 

Just keep in mind that every bug could be used against you, either in a way where people simply think you're incompetent for making stupid mistakes, or in a way where people think you're actively sabotaging the project.

Outside of the Olympic Games, sabotage and cheating are strongly discouraged and can actually backfire badly.

Michael Stum
A: 

There is a problem at my organization where User Acceptance (UA) testers don't scrutinize applications until the final test phase, after QA has already passed them, when requirements changes are very costly (and unfortunately far too frequent).

On an iterative project, to make sure they focus on the product, I have put "jesters" in the user interface that are deliberately ugly or poorly worded (e.g. Lorem Ipsum, orange backgrounds, Courier fonts, ...). The hope is that the UA testers will engage with the application and start word-smithing the text or UI immediately, rather than taking a cursory look, saying it is "good enough", and waiting until the end.

Ed Griebel
A: 

Maybe not for code reviews, those are tough enough, but perhaps if you want to make sure the QA group is hitting your code hard enough... here is a link with a section about Defect Seeding from IEEE Best Practices.

the empirical programmer
A: 

Not only is introducing bugs always a bad idea, it will also make you look bad. If they find the bugs they will think less highly of your code. If you then explain to them that you just introduced the bugs to test them then they will also think less highly of you (because they'll think you're either paranoid or a liar).

+1  A: 

Depending on how this is done, I think that it might be awful/silly or valuable.

Several commenters seem to think that this idea is basically playing "gotcha" with your teammates, and I'm inclined to agree.

However, as suggested in the IEEE article, if your overall process is very mature then there can be a place for Defect Seeding. In the article, the implied context is one where the organization is concerned with measuring and monitoring the defect numbers pretty comprehensively, including arrival rate, closure rate, total (not just found) defects, etc.

The article talks about some of the risks with this, as have many commenters here. If your process is mature enough to justify Defect Seeding then there are various ways to guard against those risks.


A related notion is to ship a coverage-instrumented version of code to QA, though not the release candidate. The idea is to verify that the QA organization's tests actually exercise the code. Two pieces of information can be gleaned from this practice:

  1. QA Oversight: This is particularly useful when there is weak or missing traceability from requirements to tests.
  2. Dead Code Detection: See what code isn't tested; if your traceability is strong, then you can have some confidence that most/all production-needed code is test-covered, so uncovered code is dead. Or, in practice, this is a starting point for further examination.
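
A minimal sketch of the idea, assuming a Python codebase, pytest, and the coverage.py library (the practice itself is language-agnostic; these tool choices are just one possibility):

    import coverage
    import pytest

    # Instrument the code before QA's tests run against it.
    cov = coverage.Coverage()
    cov.start()

    pytest.main(["tests/"])   # run the QA-facing test suite

    cov.stop()
    cov.save()

    # Lines that were never executed point either at gaps in QA's tests or,
    # if requirement-to-test traceability is strong, at dead code.
    cov.report(show_missing=True)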
Ray