I'm wondering, from your experience, what are the most useful techniques for finding (or preventing) bugs? Or to ask this another way, how do you rank these techniques in your own experience?

  1. Continuous integration
  2. Code Reviews
  3. Unit Testing
  4. Manual Testing
  5. Focused testing days
+3  A: 

Thinking .........

Martin Beckett
I totally thought that was a placeholder answer whilst you typed your actual answer, then I errr, thought some more.
Dominic Rodger
This won't and never will prevent bugs (at least, not by itself).
Pascal Thivent
@Pascal Thivent - yes it will. It will not guarantee no bugs get through (as in fact, nothing can), but it will stop an awful lot if you think before you code.
Dominic Rodger
If all thinking prevents you from writing code, then of course all code will have no defects (exploiting the fact that the universal quantification over an empty set is always true :-))
The problem is that everyone thinks he has put in enough thought when he is coding. No coder would admit to having done something without thinking. So saying "think" actually means: use some design or other techniques to keep yourself from producing too many bugs. That's why -1.
-1: Your answer is technically correct, but it's incompatible with the spirit of the question.
Jim G.
Err, it was a placeholder - then I got called away on a problem that took all day, and I can't remember what I was going to say ;-)
Martin Beckett
+3  A: 
  1. Unit testing
  2. Continuous integration
  3. Manual testing
  4. Code reviews
  5. Focused testing days
Dominic Rodger
Interestingly, this seems to be at least partially the reverse order than when you order by average percentage of bugs found for each technique. At least I remember reading that code review detects many more defects than unit testing.
@Johannes Rössel - Huh. Not sure if that's (a) just me having a wonky perception of where my bugs get fixed; (b) where the sorts of bugs I create get found; (c) peculiarities of my work environment; (d) the study being wrong; or (e) something else.
Dominic Rodger
@Johannes Rössel, @Dominic Rodger: Your conversation is worthy of its *own* thread.
Jim G.
+26  A: 
  1. Thinking before coding
  2. Fixing bugs, or the potential for bugs, as soon as I see one (not leaving it for tomorrow)
  3. Testing each little piece of functionality that makes sense on its own
  4. Making it a principle to avoid quick hacks
  5. Using defensive programming techniques wherever possible (immutable objects, read-only properties, type constraints, etc.)
  6. Writing self-explanatory code, and comments where the code may not be easy to understand
  7. Avoiding sophisticated expressions that require a very fresh state of mind to comprehend
  8. Avoiding dense blurbs of code; better to make it more verbose to avoid misunderstanding (not saving a few characters on code-block brackets helps greatly)
  9. Avoiding code duplication. Later it will be next to impossible to trace all instances.
  10. Avoiding magic constants in code. Even if you're 200% sure you'll only need it in one place, extract it to a constant. It will do you good.
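Items 5 and 10 (defensive programming and named constants) might be sketched like this in Python; `RetryPolicy` and `MAX_RETRIES` are hypothetical names invented just for illustration:

```python
from dataclasses import dataclass

# Item 10: a named constant instead of a magic number scattered through the code.
MAX_RETRIES = 3

@dataclass(frozen=True)           # frozen=True makes instances immutable (item 5)
class RetryPolicy:
    attempts: int = MAX_RETRIES

    def __post_init__(self):
        # Fail fast on invalid state instead of letting a bad value propagate.
        if self.attempts < 1:
            raise ValueError("attempts must be >= 1")

policy = RetryPolicy()
# policy.attempts = 5  would raise FrozenInstanceError: the object is read-only
```

Because the object can never hold an invalid or mutated value, a whole class of "who changed this?" bugs simply cannot occur.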

Expanded my answer here:

Wise programming techniques for writing quality code

Developer Art
+5  A: 

Hire good programmers

+4  A: 

I'd rank them as:

  1. Proper inter-group communication between the requirements guys, the customer, and the other developer groups. Most bugs we've "found" have been traced to improper documentation of user wants, needs, or environment.
  2. Unit Testing
  3. Code Review
  4. Manual Testing
  5. Focused testing days

Our company doesn't do continuous integration, so I cannot rank it.

+1: For emphasis on proper inter-group communications.
Jim G.
+1 inter-group communication. Happens over and over.
Tony Ennis
+1  A: 

I would rank them as:

  1. CheckStyle (and consort)
  2. Unit Testing
  3. Continuous integration (for non-regression)
  4. Code Reviews (can be informal)
  5. Manual Testing
+13  A: 
Pascal Thivent
-1: For all I know, 'poka-yoke' might be an effective way of preventing bugs, but I think you owe SO readers a synopsis of how you can apply 'poka-yoke' to software development.
Jim G.
@Jim G. I've updated my answer. Let me know if you find this more satisfying (in which case, you are allowed to change your vote).
Pascal Thivent
+1: @Pascal Thivent - Nice update.
Jim G.
+2  A: 

Unit testing has had and always will have the biggest impact for me.

Everything else either doesn't work, breaks under stress or doesn't yield enough ROI.

Aaron Digulla
+1  A: 

During development I use a combination of test-driven development and static analysis of the code (e.g. QAC or QAC++).

+6  A: 
  • Understand what the problem is.
  • Design a solution.
  • Bounce the solution off of some coworkers who understand the problem domain.
  • Rework the solution design if necessary.
  • Keep code small. Classes that aren't monolithic. Functions that aren't hundreds of lines long.
  • Write test code. It will keep you honest and help find the bugs. Don't skimp on this.
  • Don't try to be too clever. Some of my worst problem code was the result of my trying to be clever.
  • Don't try to juggle more things than you can. Context switching between projects can be a productivity killer.

One other thing:

  • Manage expectations. That means don't let the schedule force you into slinging a bunch of crap code to meet an unrealistic deadline. The test code goes out the window most of the time when the schedule gets to be unattainable. This generally means buggy software gets delivered. You aren't going to deliver perfect software but the goal is to ship the best you can.

Other things help, but ultimately, knowing what you're trying to solve and not letting the drive to be "done" derail your train is the key.

Just my two cents.
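The "write test code" advice above can be sketched with a minimal unit test; `parse_price` is a hypothetical helper invented for the example:

```python
import unittest

def parse_price(text: str) -> float:
    """Hypothetical helper under test: parse '$12.50' into 12.5."""
    return float(text.strip().lstrip("$"))

class ParsePriceTest(unittest.TestCase):
    """One small test per behaviour of one small piece of functionality."""

    def test_plain_dollar_amount(self):
        self.assertEqual(parse_price("$12.50"), 12.5)

    def test_surrounding_whitespace_is_ignored(self):
        self.assertEqual(parse_price("  $3.00 "), 3.0)

    def test_garbage_raises(self):
        with self.assertRaises(ValueError):
            parse_price("twelve")
```

Run with `python -m unittest <module>`; each test pins down one behaviour, so a regression names the exact piece that broke.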

+2  A: 
  1. Code Review (Both in person and informal, and in a formal deferred manner).
  2. Unit Testing
  3. Continuous Integration
  4. Manual Testing
  5. Static Analysis tools

I am a huge fan of code review, and from my experience, it is where we've seen the most bugs fixed. While doing a deferred code review by using tools like the new code review module in FogBugz is very helpful, I've seen a lot more bugs found (and epiphanies experienced) while doing in-person review.

For those doubting the power of code review, I highly recommend trying out the Rubber Duck Debugging technique (and hey, it works even better when your rubber duck is a fellow coder). Just going through your code while explaining it is a great way to spot logical errors and other mistakes. I added a small honourable mention at the end for static analysis tools, which may not be able to catch business-logic errors but can be handy for spotting simple mistakes and inefficiencies.

I never heard of Rubber Duck Debugging, but one boss always required us to explain how our web pages worked to a little Homer Simpson doll. DOOOO-NUUUUT.
Tony Ennis
+1  A: 

Preventing bugs:

  • Think before coding
  • Avoiding cleverness
  • Avoiding cleverness
  • Pseudocode before code

Finding bugs:

  • Testing
Paul Nathan
+1  A: 

While writing my own code

  • Using the call stack
  • Using asserts
  • Not reusing variable names. I used to write var t = XYZ; ... t = ABC;. I find the code is better when I don't do that, and it's also easier to see which variable went wrong.
  • Libraries. When I was a C++ programmer I spent lots of time over-engineering and writing code myself. Now that I've switched to .NET (IIRC Perl has great libs and Python isn't terrible) I spend no time writing libs, and with the occasional open-source or user lib (a user that isn't me) I don't try to rewrite the code even if it does look terrible.
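The point about not reusing variable names might look like this in Python (`load_raw`, `clean`, and `score` are made-up helpers):

```python
def load_raw() -> str:            # hypothetical helpers, just for illustration
    return "  Hello World  "

def clean(text: str) -> str:
    return text.strip().lower()

def score(text: str) -> float:
    return float(len(text))

# Reusing one name hides which step produced a bad value:
t = load_raw()
t = clean(t)
t = score(t)      # 't' has now been a raw string, a cleaned string, and a number

# Distinct names keep every intermediate value around to inspect and assert on:
raw_text = load_raw()
clean_text = clean(raw_text)
score_value = score(clean_text)
assert score_value == len(clean_text)   # easy to check each stage separately
```

When a pipeline misbehaves, the second style lets a debugger (or a print statement) show exactly which stage produced the wrong value.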
perl's libs are amazing.
Tony Ennis

Always look at the obvious first when trying to find a bug.

+6  A: 

My most useful technique for finding bugs is using ASSERTs.

I have found many very subtle bugs (and also many not-so-subtle ones) this way - bugs that would otherwise go unnoticed but nevertheless cause, e.g., incorrect results, perhaps only for a particular kind of input. In many cases it would not have been possible to predict the existence of such a bug, even with a lot of thinking.

ASSERTs are available in one form or another in most environments. I have used them with C, C++, VB.NET, C#, Python, and Perl. The action when an ASSERT evaluates to false does not necessarily have to be quitting the program; the failure can also be logged to a file, or a dialog box shown (stopping execution). The latter is the default for Visual Studio (desktop applications only?).
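A minimal sketch of this style in Python, with an invented `running_mean` function whose internal invariant is checked at every step:

```python
def running_mean(values):
    """Invented example: incremental mean with internal sanity checks."""
    assert len(values) > 0, "running_mean needs at least one value"
    total = 0.0
    for i, v in enumerate(values, start=1):
        total += v
        mean = total / i
        # Invariant: a mean always lies between the min and max seen so far.
        # A subtle accumulation bug would trip this assert long before the
        # wrong final result quietly escaped into the rest of the program.
        assert min(values[:i]) <= mean <= max(values[:i]), "mean out of range"
    return mean

print(running_mean([2.0, 4.0, 6.0]))   # prints 4.0
```

The invariant documents what must hold, and it catches the bug at the moment it happens rather than three modules downstream.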

Peter Mortensen
Amen. The only thing better than asserts, is more asserts.
Mads Elvheim
+1  A: 

Thoroughly understanding the problem being solved.

Greg Mattes

I love the ideas of IMVU and continuous deployment. Of course it requires a ton of automated testing, but also severely limits the impact of issues that do arise.

Here's the blog. Good stuff:



As you debug, keep a record and then back out your fix to verify that you've properly fixed the problem.

This is the simplest thing that I have found that helps me to debug and make sure that I've fixed the problem.

Quite often it is not the last thing you changed that fixed the problem; it can be a mixture of various effects. Performing this simple check will show you the actual fix.



Rob Wells
+1  A: 

Preventing and finding bugs starts at the beginning of the project, before programmers start coding. The few things that everybody should keep in mind are as follows:

  • Requirements analysis -> requirements should be analysed properly with the customers and, if possible, with the system engineers, with developers and testers attending

  • Brainstorming between the developers and testers on how they understand the requirements

  • Planning to be done by both developers (design) and testers (test plan)

  • After coding, the developer should start the unit testing, and the tester should help by suggesting new scenarios that are missing

  • Lastly, the tester will start his testing along with the developers

One thing we should keep in mind: bugs can't be eliminated, but they can be reduced with the joint approach of developers and testers.

+1  A: 

Fuzzing. Highly depends on your line of work, though.
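A minimal random-input fuzzing sketch in Python; `parse_key_value` is a made-up target whose only documented failure mode is ValueError, so anything else that escapes counts as a bug:

```python
import random

def parse_key_value(line: str) -> tuple:
    """Made-up fuzzing target: parse a 'key=value' line."""
    key, _, value = line.partition("=")
    if not key:
        raise ValueError("empty key")
    return key, value

def fuzz(iterations=10_000, seed=0):
    """Throw short random strings at the parser.

    ValueError is the documented failure mode, so it is swallowed; any other
    exception propagates and is, by definition, a bug worth investigating.
    """
    rng = random.Random(seed)                 # seeded, so failures reproduce
    alphabet = "ab=\t \x00"                   # small but nasty input alphabet
    for _ in range(iterations):
        length = rng.randint(0, 12)
        line = "".join(rng.choice(alphabet) for _ in range(length))
        try:
            parse_key_value(line)
        except ValueError:
            pass

fuzz()   # silence means the parser survived this round
```

Real fuzzers (AFL, libFuzzer, Hypothesis, etc.) add coverage feedback and input minimization, but the core loop is this simple.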

Michael Foukarakis
++ Similar to monte-carlo testing, IMHO.
Mike Dunlavey

I wish I were as good at this as I should be, but here are methods I try to follow:

  • Since I cannot think of a bug that was not an incompletely-implemented feature, I try to minimize the number of separate editing steps needed to make foreseeable changes. This involves esoteric tricks like differential execution, and other DSL-like techniques, but that comes at the cost of a learning curve.

  • Coverage. Coverage tools bother me, because they seem to be too hands-off. I try to insert in every routine and every branch statements that look like: COVTEST("name of routine", "some comment") and implement this in such a way that it records the fact that it has been executed (but only once). When the app finishes executing, the record of statements encountered is sent to a file. An external grep is done to find all such statements in the source code, and an external utility can generate a list of all such statements not yet executed. This directs me to test those parts of the code, and is very useful.

  • Monte-Carlo testing. I have a problem with unit-testing as it's widely understood. It seems to exist mainly to see that complex data structures aren't screwed up. My projects de-emphasize data structure and emphasize language, so the combinatorics of cases to consider can seem to be impossibly large. What I've found useful is to have a random problem generator, along with a parallel (but simpler and less efficient) implementation, to make up problems at random, run them, and compare the results. When this works well, the preponderance of bugs found gradually shifts from the product code to the test code.

These different methods address different possible errors, so no one of them is sufficient. In addition, I'm lucky to have very good (and patient) QE people who test the code, because every programmer has blind spots to possible problems.

Mike Dunlavey
I think you missed the point of unit testing then: unit testing is mainly to make sure you don't screw up *code*.
@RCIX: I think there's a world-view disconnect happening here. Many programmers approach programming as being all about class hierarchies with interconnected complex data structure and message-passing. Those are so easy to screw up that unit tests are certainly needed. I never learned not to do things in a minimalist way, where there was not much that could be screwed up, therefore not much to guard against. Usually my screw-ups are in the nature of not foreseeing everything the users might want, or in interfacing with overcomplicated, underdocumented class schemas.
Mike Dunlavey

All the listed methods are valid points. There is nothing like testing your code thoroughly. Find all the possible distinct scenarios you can think of for testing. Good documentation also helps when the product is a software library or an API.


I believe that a holistic answer to this not-at-all-subjective question must explore different types of bugs and their causes.

I wholeheartedly agree with most previous answers here, but I think they are focused on and limited to the coding process (i.e. the process between receiving the specification for a component on one end and integration on the other), and to improving quality there. It is not difficult to see that a better process for writing code will lead to better code: using best practices for structuring/designing the code, unit testing, and releasing will reduce the chance of error on the developer side. Avoidance and early-detection strategies are a part of this.

However, in my experience, coding errors make up a relatively small portion of the whole set of defects. In between the scoping process and the delivery of the final product in a full-lifecycle software project, bugs that result from incomplete or incorrect specifications are much more difficult to detect and avoid. Developers can argue that such defects are not really bugs, but from the user's or customer's perspective, they are of course. As developers, we have less control over this type of bug, though.

The best quality code cannot replace quality in the QA process. A strict separation of roles between developer and tester improves the detection rate. Testers should work with subject-matter experts on developing test plans that cover all use cases. All parties involved in the project need to share the view that QA is a critical component to success, that it takes time, and that shortcuts will lead to trouble.

As trivial as it sounds, frequent and formal communication between developers, testers, and the customer/end users throughout the project is probably the most effective strategy to detect and avoid bugs that are not a result of coding errors.


My personal favorite is to check all code pathways. While one interpretation of this is white-box testing, the other is to ensure that the requirements are complete. This can mean asking a lot of questions, sometimes getting an "I never thought of that..." response, and working things through; since the user may not have thought things through, the programmer may help finish resolving the fuzzy requirements.

JB King
