Some say that a debugger is the mother of all evil. What do you think of this approach?

I have a friend at work, a colleague, who's completely against ever using a debugger.

I asked him: So, you just write code without bugs? Is that it?

He answers: Of course not. Everyone makes mistakes, the difference is how you deal with them effectively, how you make sure not to make the same mistake again. When using a debugger, you may find your way to that bug, and you may fix it for the specific scenario you've witnessed, but

  1. you're wasting your time, because all that time put into debugging can never be reused. It's a one-time hack in the sense that if you have another bug later, you'll probably need to start all over again, and

  2. you've only solved this one bug, and while it might only occur for this specific scenario that you tested, you most likely did not solve a more general problem. That's because you're not thinking in generality, you're in a debugging mindset, not a general mindset.

Me: OK, fine, you don't use a debugger, you think it's a waste of time. What do you do when you find a bug then?

Him: When I find a bug here's what I do:

  1. Read my code. Understand it. Document it.
  2. If a class, method, or function is not coherent, refactor it until it is.
  3. Add asserts. Use preconditions, post-conditions, etc. Asserts are very effective.
  4. Add logging. When the program runs it should tell its user what it's doing, like you're reading a book. Don't assume the user understands the code, don't assume you understand the code. Let the program tell you exactly what it's doing, you will not regret it.
  5. Unit-Testing. Except for the most trivial getters and setters, you need to test everything. Most bugs can be found while unit-testing, or while writing the tests.
  6. Code review. Have someone else look at your code. When he/she asks you questions, you'll understand your code better. Many times I've found bugs while trying to explain to a reviewer what my code is doing.
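To make steps 3-5 concrete, here is one possible minimal Java sketch of asserts as pre/post-conditions, narrative logging, and a tiny unit test. The `Account` class and the amounts are hypothetical examples, not anything from the conversation:

```java
import java.util.logging.Logger;

public class Account {
    private static final Logger LOG = Logger.getLogger(Account.class.getName());
    private long balanceCents;

    public void deposit(long amountCents) {
        // Step 3: precondition -- reject impossible input loudly.
        if (amountCents <= 0) {
            throw new IllegalArgumentException("deposit must be positive: " + amountCents);
        }
        long before = balanceCents;
        // Step 4: logging -- the program narrates what it is doing.
        LOG.info("depositing " + amountCents + " cents onto balance of " + before);
        balanceCents += amountCents;
        // Step 3 again: postcondition (checked only when the JVM runs with -ea).
        assert balanceCents == before + amountCents : "balance must grow by the deposit";
    }

    public long balanceCents() {
        return balanceCents;
    }

    // Step 5: a minimal unit test, standing in for a real JUnit test case.
    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(250);
        if (a.balanceCents() != 250) throw new AssertionError("expected balance of 250");
        boolean rejected = false;
        try {
            a.deposit(-1);
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        if (!rejected) throw new AssertionError("negative deposit must be rejected");
        System.out.println("ok");
    }
}
```

In a real project the `main` method would be a JUnit test class, and the log output would go through the team's configured handlers rather than the default console handler.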

Me: OK, dude, that's a lot of things. Are you sure this is the best use of your time?

Him: True, if you have a single bug at 8 PM after a long day, and all you want to do is fix it and go home, you might get tempted to open a debugger and get rid of that thing already, right?

Me: yeah...

Him: Well, I think this is when good developers show. A good developer needs to be self-disciplined and realize that every minute you waste on a debugger is a wasted minute; you'll never get your time back. If instead you invest your time smartly in documentation, refactoring, asserting, logging, unit-testing and code reviews, you're investing in a brighter future. It might be that this evening you'll get home late, and that is indeed sad, but I guarantee you won't regret it: in the next couple of days not only will your coworkers think highly of you, but you'll also have much more free time, because this evening you solved not just one bug, but a design issue and five other bugs as well.

Me: OK, that's a bit extreme for me. I can see why you're saying that using a debugger is a very short-term investment and that professionals should make long-term investments, that's cool. But isn't it a bit too extreme? I mean, is there any good time to use a debugger at all? What about, for example, when you inherit code and you don't even know how it's supposed to run?

Him: Dude, I wouldn't want you on my team. If you want to read new code, print it out and take it somewhere quiet. A debugger is not a Kindle.

So, stackoverflowers, what do you think of this approach? Is a debugger the mother of all evil?

+47  A: 

In some cases you won't find the bug no matter how hard you stare at it. If using a debugger is considered bad, then I'm happy to be bad and actually get some spare time. If your friend is happy with his debugger allergy, that's his problem.

+2  A: 

So it sounds like when your friend has a bug he just changes things until it goes away?

I'm with you I think. He'd be far better off using a debugger to direct his activity towards the actual fault rather than changing everything he can think of and hoping the bug will disappear.

Paul Mitchell
There is a vast difference between "just changing things until it goes away" and examining the code to see what could go wrong and then changing it to eliminate that possibility.
Dave Sherohman
+2  A: 

Short answer: No

Long answer: Asserts, logging, and debugging are all ways of testing your code. Asserts are only for the most extreme cases (e.g. an object can never be null); if you have them in production software, you are doing it wrong. Logging tells you what the user did. This is sometimes handy but not always needed, and to reproduce the bug you have to find out where it originated by feeding in the same inputs as in the log. It is time-consuming, and logging each statement makes your code twice as long.

I test small portions of code with the debugger, stepping through a function. I add logging only if I or the user will ever read it; if not, I don't implement it.

You don't log each statement ... you log each process. There are some tools which log method calls for you, too.
+5  A: 

It rather depends - at work last year, I was using Java and JUnit, and didn't need to use a debugger once. For the last few weeks, I've been working in C and have often had to use a debugger - a segfault carries much less information than a Java stack trace. Every case the debugger found for me would have caused a class-cast or null-pointer exception in Java, giving exactly the information the debugger gave.

Pete Kirkham
Your Java must have very simple methods (a good thing). When I get a stack trace from a NullPointerException, it tells me... that I need to set a break at that method entry and step through it. :(
Simon Buchan
+1  A: 

Different people work best in different ways. If his code is maintainable and of good quality, then why change a winning formula? But that doesn't mean you should stop using debuggers if they work for you. Although if his strategy really works, it might be worth a try - maybe it works for you too.

Benjamin Confino
+1  A: 

An interesting philosophy. If it works for him, so much the better -- one of my colleagues does believe that code should just work, the first time and every time. He built both ends of an IR remote controller, in assembly language, and wouldn't you know it worked flawlessly from the start.

For the rest of us mere mortals, however, I'm a fan of using every tool at my disposal, starting with my brain, the best practices that have proven themselves over the years, my colleagues, and on the rare occasions an error makes it through a compile (heavy sarcasm here!), a debugger.

It's also helpful to step through code one line at a time, to ensure it's doing what you expect it to do. Sometimes you get the right answer for the wrong reasons, and it's important to understand that. Black-box testing is best left to the customer QA department.

P.S. I suppose any discussion of debugging is incomplete without a reference to printf(). So, just for completeness:

printf() : debugger :: darts : nanosurgery
Adam Liss
"Black-box testing is best left to the QA department" -- Ouch!!! I am of the opinion that you should make sure your code works before passing it on to the next person. I don't trust developers who don't think likewise. Their work assignments will be adjusted accordingly on my teams (if they last).
@Dunk: Agreed, hence the distinction between black-box testing (poking at the outside because you can't see the inside) and the debugger (which shows you all of the details you need to understand before QA and the customer see the product). Sorry if I was unclear - we're saying the same thing, no?
Adam Liss
@Adam - I'm not sure if we are saying the same thing. You can test for full code coverage class by class and not verify much, IMO. It's when you hook those classes together that you verify they work as desired. Until that level of testing is done, you don't know if the code works in most cases.
Plus, I usually work in formal process environments. Thus, any bugs that I ship to QA will get written up in a discrepancy report. I don't want to see my name show up in those reports. Also, I would not think highly of a developer whose name showed up very often in those reports.
@Dunk: Let me try again: As you note, QA should validate the product; it's the developer's job to make it work. But QA generally tests from the "outside;" the developer has the additional responsibility of ensuring it's not working "by accident" -- that the internals work as designed.
Adam Liss
printf() : debugg**ing** :: darts : nanosurgery :-)
@Pax: printf() : debugging :: darts : nanosurgery-ing ?
Adam Liss
Yes, that's right, nanosurgerying is a real word, isn't it? :-) Actually my beef was that debugger was a tool while nanosurgery was a process (like debugging). But I didn't know what tools a nanosurgeon used (very small scalpel?).
Still, the conversation's getting bizarre now, so we can just leave it as is; here's a +1 for your trouble.
@Pax: Thanks. My thought was that debugger and nanosurgery are both nouns; debugging is a verb. But I agree that it's still not quite right, and not terribly relevant to the question. :-) I can always count on your comments to make me smile ... and to think!
Adam Liss
+14  A: 

The steps above are all important and necessary for developing proper code. However, I can guarantee that you can do steps 1-5 (and sometimes even 6) perfectly and still not know why your unit tests are failing. This is where the debugger can be a useful tool.

Your friend also risks resorting to "carpet-logging" where every two lines are logged to try and get some understanding of what on earth is happening. Inevitably (after 10pm) the developer will forget to remove the logging code and end up committing his code (with a few hundred pointless logging messages) into SVN. This completely pollutes your code, making it impossible both to read and to parse log messages for errors.

In my opinion, debugging should not be an alternative to unit tests, but it is certainly a tool to use when necessary. Most development environments provide really sophisticated debugging tools, including on-the-fly modification of variables, conditional watches, etc. All these things can help you trace bugs productively.


I don't know... if I'm too stupid to get my unit test running, I need to debug it to see where I'm actually going wrong. A few times my intelligence suffices to get that particular piece of code running with assertions and all; in the other cases I'll use the debugger to get my head (and subsequently the code) sorted.

+137  A: 

What bothers me about these arguments is the people who adopt a dogmatic approach towards software development. You must (or must not) do X or Y.

Back in the real world, X or Y is almost never a fixed requirement. Instead of taking absolute positions, every developer should look carefully at the goals and demands of each project, and then select the appropriate tools for that specific project. There's just no such thing as a "best practice" without any context.

The software development world needs less dogma and more pragmatism.

UPDATE: Reading the question again, another thought just occurred to me. Your friend is saying that a debugger is used to debug a program. But in reality, a debugger is used to visualise and hopefully understand the program. That feeds useful information back into the debugging process, but it isn't actually debugging. So maybe if we used the term "visualiser" instead of "debugger", there would be less resistance to using the tool.

Yeah, Totally Agree. Though I suspect all these dogmas were raised by highly pragmatic people who saw lots of less-than-competent programmers do many stupid things.
hasen j
I like this.. As people that only work with 1's and 0's it's quite understandable how many issues we have from not seeing the grey between black and white.
Robin Day
I suspect all these dogmas were raised by highly pragmatic people who wanted to remove some competition on their field of expertise. ;)
+1 For the "visualising" aspect. Never thought of that.
Helper Method
+261  A: 

Your friend is a perfectionist. That's fine, it's in his nature but I suspect he wouldn't last very long on a team that had specific deadlines for bug turnaround and fixing (at least not without tempering his perfectionism a little).

I can just see the conversation now:

Boss: Well, guys, we're in a bit of trouble because we're consistently missing our bug-fix targets. Any idea what's going on?

You: It's because we don't use debuggers any more, sir.

Boss: Really, why not? Don't they allow you to more easily locate your problems?

You: Yes sir, but Bill here has convinced us of the error of our ways. Nowadays, when a bug gets reported, we reread our code to understand it. And add documentation to it. Sometimes we even re-factor it if it's not as understandable as it should be.

Bill: That's right, and we'll even go and add unit tests, more logging and do code reviews to ensure we understand everything about the code.

Boss: And you do this rather than use a debugger that would zero in on and fix the specific things our performance is measured on? You know, the things I'm getting raked over the coals for at the monthly management meetings.

You and Bill: Yep, that's because we know our customers understand that good software is worth the wait.

Boss: Right, pack up your things, you're both fired. I'll get in some guys tomorrow who understand that IT is all about service delivery, rather than something that exists for its own sake.

Now that the fun's out of the way: No, a debugger is no more evil than a compiler. It's a tool that allows you to do your job.

And, if your job is endlessly re-arranging your code, maybe you won't need it. But I think you'll most likely find your job involves getting software out into the field as bug-free as possible and being able to respond in a timely manner when your customers complain of problems.

They will not understand Bill's Utopian vision, they will burn Bill's company to the ground and spit on its ashes.

+1: Pax, you always make me laugh while you point out an angle I hadn't considered!
Adam Liss
Oh, and compilers _are_ evil -- any programmer worth his salt uses `cat` and enters machine code!
Adam Liss
Funny... thanks ;)
@Adam, my wife doesn't understand my oft-dark sense of humor (which is one of the reasons I come here :-). @Ran, I'd be interested in your friends response if you showed this to him. I suspect he'd be less flattering of my opinions.
+1. I am with not using a debugger for EVERY issue, though, but it is VERY useful in some cases where I can't locate the bug otherwise (using unit tests and other things).
Amusing story, but you're missing the OP's friend's point that non-debugger activities are investments against future bugs. IOW, if you're putting in unit tests from the start, you don't need to stop for a week to add them when a bug comes up - they're already there (and the bug may not be).
Dave Sherohman
+1. The idealistic approach only works when bug fixing isn't a time sensitive endeavor. It usually is very time sensitive when the bug exists in production code.
@Dave, his friend was "against using a debugger whatsoever" even in areas where it's invaluable. I fully realize that unit tests are very good for development (but they rarely cover every bug that's found) - I concentrated on the maintenance side because that's where debuggers shine.
+++++++1 That is exactly the point.
Well, seriously, Bill doesn't use "Test with Debugger"? :P
Jon Limjap
+1 for practical advice with humor :).
Hang on...whose rambling on too much? ;-)
This is the reason I love this place.
If I had a penny every time that a debugger helped me find out a weird corner case bug that I could think a year over and never figure out by myself...
Mario Ortegón
... I'd have 3 pound and sixpence? :-)
To be fair, I'm trying to discourage reliance on debugging tools within my team. They have a definite place, but their use should be in concert with a clear understanding of what code should do and how to work around issues in a production environment. It's not always possible to "attach to process," so-to-speak, and there are occasions where you need to think on your feet to handle support-related issues at the wee hours of the morning. I've been around developers who absolutely need a debugger to work through an issue.
David Andres
@Adam: real programmers read XKCD.
When there is a hardware issue I do not put in another component if one is faulty. I simply rebuild the entire OSI model from scratch so that I know everything is correct. When we fully understand everything from top to bottom there will be no hardware or software faults.
Ryan Christensen
I think Visual Studio IDE is Evil. all the colors are confusing. I'm writing my code in notepad (not notepad++, which is confusing too).I'm compiling in command line : csc *.cs. that's the right way to do it. it took me a week and now I have done "hello world". next: operating system.
@Hagai I think you just qualified to be my next "god of programming". That is, of course, when I see your finished operating system that continuously rewrites and recompiles itself to fit the users' needs. With no colors :-P.
I think it has a lot to do with the developer's experience, the code's complexity, and documentation. For an experienced developer, reading code can sometimes be faster than using a debugger. And there is no way you can (or would be allowed to) hook a debugger to an application in production, even if the issue can only be reproduced in that environment. Personally, I prefer reading the code first and using the debugger as a fallback.
Bill reads his letter of dismissal. Understands it. Documents it...
Ian Mackinnon
+8  A: 

Often I use a "step through" on a debugger to just check that code is going the right way and makes sense. It's by far the quickest way of checking that you are not adding unnecessary steps or making the code do weird looping backflips.

Debuggers can be handy if you've just laid down 300 lines of code on a prototype and accidentally left off a line terminator or left out a period. Sometimes these things are hard to spot by eye.

I think if you're leaning on debuggers to make stuff work there might be a case that you're over using them but I don't really see any problem with using a step-through sense check and a quick "whoops" catcher on dev work.

Maybe I'm wrong.

+1: If half of the developers I've worked with were half as thorough as this on half the code they wrote, we'd have a far better product, not to mention a fantastic math problem!
Adam Liss
+32  A: 

One thing not mentioned so far is that debuggers are also great tools for teaching you more about how your own code, and often more importantly, how other people's code works. You can see what is happening under the hood, dynamically, as it happens. Your friend's philosophy, to me, is akin to an engineer designing a car without ever turning the engine over before the tires have been fitted. It just doesn't seem sensible.

I use unit testing, design by contract style asserts, debuggers, profilers, static analysis, and whatever else gives me the best quality result at the lowest cost. Debuggers aren't bad. Using them in place of proactive QA can be.

Shane MacLaughlin
+3  A: 

Hundreds of unit tests can run in less time than one needs to step through a method.

Now, writing those tests has a cost, so a compromise has to be found somewhere.

but when one of those tests fails, you can use a debugger to find out why.
...and if those tests require user-interaction?
@slim: if the unit tests are focused enough, one can generally fix a failing test without resorting to a debugger. For the remaining cases, putting a breakpoint in the failing unit test helps seeing what the code does.
@David: it's the agile community that claims one does not need a debugger when the code is covered by unit tests. Their tests are automatic, fast, repeatable, isolated and focused. They write their tests along with the code (TDD - Test-Driven Development). Such tests run very quickly and can save you from having to resort to a debugger: when you have a good test suite, it's quick and easy to recreate a problem with a new unit test. One runs one's unit test suite a lot more when it is automatic.
+13  A: 

When I write code using TDD, I very rarely need a debugger. Because I run the tests every few lines of production code, I notice immediately when something breaks, and I know which change broke it, so I don't need a debugger to find the reason.

It's only very rarely, when some obscure bug occurs and I have no clue why, that I start up a debugger as the last hope. Then I put a breakpoint in a test that reproduces the bug and dig in deeper. Quite a lot can also be done by just adding temporary println() statements to print debug information at strategic points.
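Those temporary println() statements might look something like the following sketch. The `parsePositive` helper is a hypothetical example, not code from the answer's project; the `[debug]` lines are the throwaway statements you delete once the failing test is understood:

```java
public class PrintlnDebug {
    static int parsePositive(String s) {
        String trimmed = s.trim();
        System.err.println("[debug] raw input: '" + s + "'");   // temporary
        int value = Integer.parseInt(trimmed);
        System.err.println("[debug] parsed value: " + value);   // temporary
        if (value <= 0) {
            throw new IllegalArgumentException("expected a positive number, got " + value);
        }
        return value;
    }

    public static void main(String[] args) {
        // Stand-in for the unit test that reproduces the bug:
        if (parsePositive(" 42 ") != 42) throw new AssertionError("expected 42");
        System.out.println("test passed");
    }
}
```

Printing to stderr keeps the debug chatter separate from the program's real output, which makes the temporary lines easier to find and remove afterwards.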

For example, in my current pet project (an application server with transparent persistence to object database - some highly complex problems), I have about 4400 SLOC production code and 7300 SLOC test code (Java), and I've used a debugger only once or twice. IIRC, one of those occasions was some bug in the system's internal database, related to locking during transaction commit - a hard to find bug.

Then there are also kinds of bugs where a debugger does not help and you can only use your own reasoning. Concurrency bugs caused by inadequate synchronization are one example. Once I found a bug in Guice 1.0 where a method returned null even though it should never return null. The way to find the reason was to read the code backwards, looking at every assignment that variable's value could have come from, while reasoning about whether the JVM's memory model could ever allow it to be null.

Esko Luontola
All true, but you do use it when you want to or need to, right? You're not being dogmatic?
Yes, I use it when I need it. I have nothing personal against debuggers, it's just that I very rarely have need for them, at least in the code that I myself write. On the other hand, when exploring legacy code, debuggers are very useful for understanding the system.
Esko Luontola
This is all good and well with a 5K-line system. But can you run the test cases that often on a 250K-line system? True, you can run only the tests in that area, but most hard problems are caused by a slight bug far away from where you are currently working, which causes some input to the current code to differ from what's expected.
+1  A: 

"I think that this is when good developers show" - let me guess: your friend is exhibit #1 of "good developer".

I don't disagree with the recommendations, but I think the very best developers know the rules and when to break them.

Sometimes I need to check my assumptions. The faster I fire up the debugger and find out where my assumptions have led me astray, the sooner I can get on with it.

If a debugger will find that error under a tight deadline, I say "Let's debug."

+2  A: 

I use the debugger to "watch the gears turn," then capture what I learned in a test.

I tell people that don't want to test: "We don't have time to skip that step." I would tell your friend the same thing about debugging to understand the bug.

Don Branson
+3  A: 

Sounds to me like someone is cutting off their nose to spite their face. The debugger is, in the majority of cases, the quickest route to comprehending what is going on. I fear that some people have been burnt by the debugger and, rather than appreciating the trade-offs it brings, have adopted a policy of "never again" - which is kind of like throwing away your car because it broke down once and left you in a serious fix.

The human mind has a tendency to add logic to any madness, hence the "long-term approach" this guy is using. Implying that your code isn't as clean as it could be, or doesn't have as many tests as it should, just because you use the debugger is not a thought-through argument. Surely, if you use the debugger, you have more time to do these things?

However, you are quite right to call it a little "evil": stopping program execution destroys certain precious diagnostic information that logging preserves.

But, I believe that the biggest evil is not understanding or using all the tools at your disposal. In some cases automated tools like the debugger rock, in other cases some logging is better. You just need to work out which is the best tool for the job.

+83  A: 

Ain't debugging just "reading the code" with the ability to also see real life values of variables?

And manipulate them, and step forward and back, and set breakpoints, and much more. It is such an awesome thing I can't imagine anyone not liking it.
And the ability to break only when a specific condition is true or false - ideal for long-running tasks.
Isn't "Ain't" a made-up word? :-)
Sure, but be pragmatic: as long as we all understand the message, ain't no problem. ;)
Ain't ain't made up.
Rich Bradshaw
Every word is "made up".
Ain't isn't even a "word"
Mel Gerats
Don't say "ain't" Your mother will faint! Your father will fall. In a bucket of paint!
"Ain't" is a perfectly cromulent word.
what about "yaint"?
+5  A: 

I find that, in general, I agree with your friend about how to prevent bugs from getting into your code. I also agree that spending less time in the debugger is a good thing. I'd quibble a little on the documentation point, because the documentation -- even in the code -- is frequently out of date if the code is under active development. On the other hand, when it's not immediately obvious why the code I just wrote fails my unit test, using a debugger to find out seems a lot better than going in and adding logging and assert statements (frankly, that smacks of printf debugging) if the code doesn't call for them anyway. And no, not everything needs to be logged, nor does everything need to be asserted. In that case, I prefer using the debugger over adding bloat to my code.

+3  A: 

There are different approaches to programming. I use a different approach. For me, data, code and its behaviour are all part of the program. Live data. Not just dead artifacts like source code in a file. I'm a Lisp programmer. I play around with data, algorithms, representations, etc. I write partly working software that gets extended during development. I don't write the code on paper. I write the code against a partly working system, which I change until it does what it is supposed to do (display an email, get an email from a mail server, talk to an NNTP system, render the image in some way, display the dialog nicely, ...). So debugging is a part of this.

But there is also the larger vision that one may understand and program against a spec for isolated problems (given this data, format it some way to HTML). And then there is the outside world (example: parse some data out of HTML files coming from a bunch of websites). There are specs for outside systems. Sometimes. Often the outside systems implement these specs in some special ways. Some specs and their implementations evolve over time. Some are tested against a suite - some are not. The more the programming problem has to deal with unpredictable behaviour, the more need there might be for debugging tools.

One also needs to understand that there is not a single 'debugger'. The debugger is a bunch of tools that help the developer inspect and manipulate the current state of a running piece of software. If all of a developer's problems are such that he can predict the behaviour of a software system from static text files, he might never need one. As soon as the system gets larger, is designed using different principles, and has lots of communication and lots of internal state, the debugger gets useful. Very useful. For me the debugger is part of the tool suite for interactive software development. Those who don't develop interactively might have less need for debugging tools.

In fact I develop more complex software by developing debugging tools in parallel. Debugging tools that are adapted to the problem domain. For example for graph algorithms, I might write a special visualization routine to nicely render runtime representations of graphs.

Rainer Joswig

I think it's useful to point out that modern debuggers can be used for far more than just tracking down bugs.

It's a fair bet that the majority of your projects won't be isolated and will rely on external code, libraries that you haven't written (or which are not perfectly documented), or will interact with potentially unreliable services in the cloud. You could 'code blind' and hope that you cover all the potential side-effects in your unit tests or you could use your debugger to find out the exact interactions required to get your software working (and, yes, then produce your extra unit tests!).

If you're doing advanced code using threads or trying out a new algorithm that doesn't already have a reference implementation, I'd hate to have to exclusively rely on peer-review or reading my printouts. Sure, code defensively, but don't deny yourself possibly vital data by rejecting a solution out of hand.

Oh, and are we going to get some feedback from your friend on this thread? ;)

Dave R.

No, the debugger is not the mother of all evil.

Dana Holt
Because everyone knows that Saddam and Osama were the mother and father of all evil ?
+35  A: 

No, dogmas are the mother of all evil.

You should also consider that logging will change the behavior of your program if you run multiple threads. Use each technique where appropriate. I use the debugger to find and understand the source of the problem, not to fix the symptoms.

Simon H.
Of course, using a debugger can change the behavior of a multithreaded program as well.
Dan Olson
Very true of course, but if you have a lot of logging in your code during development and then remove the code for deployment you could encounter many new bugs.
Simon H.
+1 for "I use the debugger to find and understand the source of the problem, not to fix the symptoms." Excellent point.
Daniel Pryden
As the man said, "Isms in my opinion are not good. A person should not believe in an ism; he should believe in himself."
Doug McClean

As everyone has said debuggers are a tool like everything else.

Smart developers use their brains and don't always follow dogma. If the right tool is the debugger then that's what they'll use. If the right tool is more logging then that's what they'll use. The argument that logging/documentation/asserts is always better than debugging is exactly as flawed as the other way round.

Also, I don't like asserts. When they go off, which they do in production code, they pop up their own GUI to the user, who has no idea how to handle them. They also don't leave a stack trace, and developers who rely on them think they never happen. I think they're evil and should not be used. Throw an exception instead. However on the odd occasion I have used them. The right tool for the right job.

Cameron MacFarland
Asserts are not supposed to be used in production code! If you have a condition that always needs to be checked, use `if` and throw or signal an error in some way. Asserts are for catching errors early during unit testing, or for falling into the debugger during monkey testing.
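The distinction this comment draws can be sketched as follows. This is a hedged illustration with hypothetical method names; the key point is that the `assert` is compiled away unless the JVM is started with `-ea`, while the `if`/throw check is always enforced:

```java
public class Checks {
    // Internal method: the assert documents an invariant the caller must uphold.
    // It is a no-op in a normal JVM run; it only fires under -ea during testing.
    static double internalSqrt(double x) {
        assert x >= 0 : "caller must pass non-negative x";
        return Math.sqrt(x);
    }

    // Public entry point: the production check is always on and fails loudly
    // with a stack trace instead of letting bad input slip through.
    static double publicSqrt(double x) {
        if (x < 0) {
            throw new IllegalArgumentException("x must be non-negative: " + x);
        }
        return internalSqrt(x);
    }

    public static void main(String[] args) {
        System.out.println(publicSqrt(9.0));
        boolean rejected = false;
        try {
            publicSqrt(-1.0);
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        System.out.println(rejected ? "rejected" : "accepted");
    }
}
```

Because `publicSqrt` validates first, the program behaves identically with or without `-ea`; the assert only exists to catch a future internal caller that bypasses the public check.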
+5  A: 

Most of the problems people have with the debugger come from not knowing how to use it.

+18  A: 

I would say your friend doesn't understand debugging and debuggers. Debuggers are tools to help you focus and get clarity on what is going on in the actual environment that the code is running in. You can sit back and philosophize all you want, but it is meaningless without the knowledge of what is actually going on. Why guess? To me it is more of a hack and a waste of time to throw tons of logging statements in your code printing out what you think might be wrong.

I want people working with me using a debugger when needed, rather than guessing.

+39  A: 

Personally, I find debuggers to be highly overrated, but I wouldn't go so far as to call them "evil". If you like them and find they work well for you, then more power to you, but I tend to agree with the following quote:

As personal choice, we tend not to use debuggers beyond getting a stack trace or the value of a variable or two. One reason is that it is easy to get lost in details of complicated data structures and control flow; we find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places. Clicking over statements takes longer than scanning the output of judiciously-placed displays. It takes less time to decide where to put print statements than to single-step to the critical section of code, even assuming we know where that is. More important, debugging statements stay with the program; debugging sessions are transient.

Kernighan and Pike, "The Practice of Programming"

Dave Sherohman
Agreed. +1, great quote.
I agree completely (with one caveat): I interpret the quote as saying they use a debugger to look at specific details related to the problem rather than just stepping through code. That is the way I use it. You can tell the quote is kind of old, because you rarely have to single-step to the critical section anymore.
Amazing... didn't they write that when the cost of recompiling was several orders of magnitude longer and more costly than it is today? I'd hate to live in that world.
Chris Kaminski
@chris things weren't really all that dire in 1999
Single stepping is probably the least useful thing a debugger can do. I don't know why there is this persistent misconception that that is the MO for using a debugger. If you're from the printf school, you're not going to put a printf on every line, so why would you think doing the same thing in a debugger is what people automatically do when given one?
Logan Capaldo
@mbarnett: I don't know. I know at least one program (Pro/Engineer) that was still taking upwards of 8 hours to do a complete build in 1998. Granted not every statement results in a complete build, but a link operation was at least 20 minutes back then. We had all sorts of compilation unit tricks for keeping compilers and linkers from breaking when building ProE. It was quite the monster.
Chris Kaminski
+7  A: 

Define "find a bug"? In complex systems, the path from "someone notices that something is going wrong" (when a user or tester or test case finds a bug) to "when the engineer fixing it works out exactly why it went wrong" (when the coder finds the bug) can be a long process, and can be the hardest and longest part of fixing the bug.

If a debugger gives you the insight that you need to go through this process faster, then it's a good tool. In some cases you can get by without it, but that's no reason to be dogmatically against it. Debuggers help you understand what's really going on. But if you always need a debugger to understand what's going on, your code could suck and your colleague would have a valid point.

Unit tests can be a good complement to the debugger, rather than an alternative to it. The right unit test can get you to your breakpoint near a crash quickly and consistently, whereas running the whole program to exhibit the bug might be a long manual process.

There's often more than one fix to a bug. Often a number of things have to go wrong or pass undetected before the software fails. Making a single change to "fix the bug" may be adequate, but often you can do much better, using the bug as data on the weak spots of your software.

So when the coder finally finds the bug, there are a number of pro-active things (like those that your colleague suggests) that they can do before fixing it, such as:

  • Put test cases in place that exhibit the bug, to detect if the bug recurs.
  • Think about similar inputs or related bugs that could occur.
  • Put internal tests (asserts or exception throws) in place to catch invalid conditions earlier or give better information if it happens again.
  • Review the buggy code. Where there's one bug there could be more.
  • Refactor for clarity.
  • Improve internal documentation, e.g. if there was a misunderstanding of how to use a piece of code.
+2  A: 

I think it is a mentality issue and depends on their background - just as some people will never use an IDE and rely on Notepad.


A good debugger makes it easier to find a problem location. However, in the ideal case, you then return to the code, with the new information highlighted by the debugger and stare at the code, until you understand why it, as it stands, produces the wrong result and then fix the underlying issue.

The main difference between that and logging is that the debugger route doesn't (necessarily) require a rebuild for every new point you want to instrument. But unless you have done a debug build, it may require an initial recompilation anyway.

+11  A: 

Sigh. If I go to the men's room, that's time spent I'll never see again. It doesn't solve anything in the long run, because I'll have to go again in a few hours. Just like using a debugger: I may accomplish something useful right now, but there's no long-term appreciation potential. Forbidding the use of the debugger is similar to locking the men's room door.

Except, of course, that there is some long-term appreciation potential in using a debugger to find a bug. Once I've found a bug, I can consider where else it's likely to be. I can remember the bug and the symptoms, and maybe find it later. I can benefit from finding out exactly what a certain bug is regardless of how I found it out.

David Thornley
IMHO your toilet metaphor is very poor. You have no other choice in this case, while in the debugger case you do.
Eran Harel
+2  A: 

It doesn't have to be either-or: Use the debugger to step through to figure out what is happening, and if it makes sense to, add a unit test to make sure the bug doesn't come back.

James Kingsbery
And debugging a unit test, rather than starting the whole application in the debugger and hoping to hit a breakpoint, minimizes the time spent in the debugger a lot. So I try to combine these two steps instead of adding a logging mess.
+5  A: 

One of the risks of using debugging as a primary bug-fixing technique is that you can end up fixing targeted symptoms without fixing underlying problems. This is dangerous because each little patch adds unintended complexity and you can end up with a system that is no more robust or reliable but is much harder to maintain and fix because so much of it is held together with baling wire, gaff tape, and chewing gum. This is not the route to quality software.

Nevertheless, your dogmatic friend is in error in shunning all debugging. There's only so much you can understand about the way your code works in vacuo. When your code malfunctions in the wild that is a prime opportunity to understand exactly what your code is doing in situ. In theory: theory and practice do not differ; in practice they do. In the real-world with real customers using your software there are conditions that can occur which you will never have anticipated. This will ALWAYS be true. The diversity of the world vastly exceeds the human mental capacity to model it in the abstract. A bug is your code talking to you, when that happens you need to listen. You need to dig in and find the cause, and this will lead to just as much, or more, improvement in understanding your code as reading code listings in a comfy chair by the fire at home.

What you do with that understanding, once acquired, is a different matter. Pursuing the defect to its root cause, fixing the underlying system defect, adding regression and unit tests, increasing documentation, and refactoring problematic code are all excellent ways to deal with code defects, but ignoring a key technique for tracking defects to their source is silly.

+5  A: 

Your friend is a little over the top. But one thing being overlooked here is that most debuggers aren't very good (for example, most debuggers don't allow you to travel backwards in time), and most programmers don't use a debugger. I think these facts are correlated.

When debugging C code with bad pointer invariants or other problems with linked data structures on the heap, I noticed a quantum leap in productivity when I moved from gdb to the Data Display Debugger (ddd). I'm now much more likely to use the debugger for these kinds of problems. But it still pisses me off that if I accidentally run the program too far, I can't easily go backwards. Time travel in debugging has been a solved problem for over 15 years.
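For what it's worth, gdb 7.0 and later do have built-in record-and-replay, so limited "time travel" is available today. A session might look like this (commands only; the program and breakpoint location are hypothetical):

```text
$ gdb ./prog
(gdb) break suspect_function
(gdb) run
(gdb) record              # start recording execution from here
(gdb) continue            # run forward until the crash
(gdb) reverse-stepi       # step backwards one instruction
(gdb) reverse-continue    # or run backwards to the previous stop
```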

So, no, the debugger isn't the root of all evil. But it's also not that helpful and it can be a distraction.

Norman Ramsey
+5  A: 

I agree with 80% of what your friend had to say. But I do take care to differentiate debuggers from profilers.

It is VERY rare that I have to fire up a debugger. Usually, if you see me using one it means that I am trying to nail some kind of pesky concurrency issue. Even then, I'd rather run the code in some kind of virtualization simulator (Simics, for instance) that offers me better tools, or QEMU, which offers me introspection with a little hacking. There are times when I suspect my compiler has just optimized something away, in which case the debugger is handy to prove my understanding of the asm dumps.

I've seen people who type 'gdb' the very second after seeing 'segmentation fault', even without looking at the code first or running it through something like Valgrind. That kind of tactic becomes a vicious cycle that takes years to perfect.

It's often faster to use asserts, or even printf debugging, than it is to step through your code using some kind of a debugger. As Norman said, until I can actually step backwards, I really don't have much use for GDB or the others.

The rules I set for myself are simple. Every time I make a change or introduce something new, I:

  • Make sure it actually works
  • Run the program through Valgrind
  • Give the program unexpected input or usage, then replay steps 1 and 2
  • Commit a series of small changes that mount up to the revision I just made

This assures me (to a high degree) that any bug I find is most likely the fault of the current or previous change set, so it's easy to find. 8 out of 10 times, if you are turning to a debugger, you're doing so because you have no idea when the bug could have been introduced, or where; which points out a deficiency in your development practice.

Again, when I say 8 out of 10 times, I also mean 8 out of 10 bugs. Concurrency issues are in a whole other ballpark than dereferencing a bogus pointer in a single process.

Also, I work mostly with C (not C++, etc.) so my methods and thinking will really differ from others.

Tim Post
+2  A: 

Add logging. When the program runs it should tell its user what it's doing, like you're reading a book. Don't assume the user understands the code, don't assume you understand the code. Let the program tell you exactly what it's doing, you will not regret it.

To me this says he uses printf debugging essentially. What's the real, tangible difference between "let the program tell you exactly what it's doing" and "let the debugger tell you exactly what the program is doing"?

The benefit of the debugger here is that, while your time spent in it isn't reusable per se, you have the ability to iterate much more quickly. Perhaps you don't know if the information you're adding logging for is useful or not. The debugger can tell you that in a fraction of the time.

I've worked with people whose first step when encountering a problem is "look over the code". And it does work at times, but then there are also times when someone else can load up the debugger and catch the problem in a couple of minutes.

Debuggers are not the mother of all evil. Unwillingness to use a helpful tool because it hurts your ego as a developer is the mother of all evil.

Dan Olson
+2  A: 

I have been programming for over 6 years and I still don't know how to properly use a debugger. "cout" has always served me just fine. Now that I think about it though, a debugger would be a lot quicker than trying to manually do a binary search on my code to find the line it's crashing on.

+1  A: 

No, the debugger is not the mother of all evils.

There are much greater evils in this shattered world, and not all of these are daughters to a debugger. Even if we concentrate on bad code and bad programmers, it is pretty evident that you can produce one, or be one yourself, without ever using a debugger. Examples are abundant.

There are cases in which programmers produce obfuscated code, and use a debugger to understand what it does - "gain insight into the way the code works, and deeper intuition of the clockworks of the inner gears", as some may say. But, the infinite creativity of programmers cannot be stopped if they are forbidden to use a debugger. It has been said that debugging is twice as difficult as writing the code in the first place. So, if you apply all your talents in producing "clever" code, then you will probably want to use a debugger to compensate for your innovation and cleverness, but even if you are not given one, it would be tough to stop you from being clever and creative - in the bad sense of these words.

In my opinion, the way to understand the code is to make its understandability self-evident - and yes, documentation, assertions and the like do help. Put your talents into producing understandable code, and you will prosper.

It is not that I never found a use for a debugger - it is just that in the past 20 years or so, I have never found a debugger indispensable. I slowly gravitated towards the habit of not even installing one. In later years, the tickle of the fiendish, shortcut-seeking desire to use a debugger grew much weaker with the advent of excellent unit testing tools. When I have a bug, I simply write a test case to make it reappear, and if the test case does not reveal the fault, the emperors' old technique of divide and conquer is unbeatable. Write a test case for each of the components, and you will gain a real intuition of what the code does, and more importantly, what the code should do. After all, what the code should do is more important. You have much greater control over changing the code than over changing the requirements.

I was not always this fortunate. Some of you may be familiar with a hack I once wrote - the terse editor - a full screen editor, featuring search and replace, cut and paste and such. The neat thing about this hack is that the binary spans only 4096 bytes - including the text of a help screen. There were many nasty 8088 assembly tricks in there, including, dare I reveal the disgrace, self-modifying code. But, even in writing this, I never used a debugger. Whenever I noticed something was wrong, I simply added documentation, explaining to myself what the code should be doing.

The real challenge is in dealing with cases in which you are not at liberty to change the code - that is, when you have to fix other people's bugs. I noticed, however, that no one ever complains if you document their code, add test cases, assertions, enrich the code with reports etc. Just be careful of programmers' rage when they see their code refactored.

--Yossi Gil

PS I am probably the friend Ran was telling you guys about.

+1  A: 

Linus Torvalds isn't a fan of debuggers:

I don't like debuggers. Never have, probably never will. I use gdb all the time, but I tend to use it not as a debugger, but as a disassembler on steroids that you can program.

If such a bug happens in production DESPITE his efforts, then the question is WHY, and unless you have excellent logging or psychic powers, you occasionally have to get at information that is not accessible unless you peek inside.

That is, a debugger.

Thorbjørn Ravn Andersen
+1  A: 

I wouldn't say debuggers are "the mother of all evil." However, I almost never use them myself; I use your friend's techniques. Especially since I took up heavy automated testing, I find I spend relatively little time debugging things.

Curt Sampson
+2  A: 

Your friend is right (to a good level of confidence). When you debug, you are using your time to fix a bug, not to fix it and prevent it in the future. If you have to start a debugger, it means that your test suite is too poor or nonexistent, and this is a clear sign of trouble.

Stefano Borini
+2  A: 

Debugger Pro's

  • can find errors very quickly - important when delivery is time-critical
  • helps give a quick glance of how inherited code works, step-by-step

Debugger Con's

  • can be overused/abused
  • encourages "firefighting" rather than writing good code, (i.e., one instance may be fixed, but root issue may still exist)
  • tend to be used in lieu of Unit Tests/Logging/Refactoring

PintSizedCat, I think your friend is an extremist. Maybe debugging isn't the greatest personal investment, but sometimes we have to make the deadline and concentrate on improving ourselves afterwards.

+3  A: 

I used to jump on the debugger as soon as I saw a bug, often wasting valuable time, but now, before I start stepping through code, I make sure I first think about the issue and understand at least the basics of the code I am trying to step through. Once I have a clear picture in my head of what I want to look for, I use the debugger. There are some instances where the debugger can't help (timing issues, for example), and then you have to rely on other tools.

It's not the tool itself that's good or bad, what matters is how you use the tool.

To paraphrase: "a debugger is no substitute for thinking".

Some say that a debugger is the mother of all evil. What do you think of this approach?

That it is incorrect. A debugger is a useful tool, and it's foolish and uneconomical to refuse to use it.

Daniel Daranas
A debugger is a useful tool *if* it is a useful tool. Many times it's better to use the above outlined "no debugger" methods.

In my academic experience, the use of a debugger was something that I tried to stay away from. Not because I wasn't allowed to use them - but because I figured that at this stage in my programming career, where I'm learning how to do things properly, debugging my code 'by hand' was a much more useful exercise than using a debugger to do it for me. Or at least I felt like it was. For one thing, it made me a more cautious programmer, and it improved my bug-finding and bug-fixing skills. It also made me better at reading code.

Of course, that was in school. Though I'm still in school, I can safely say that if I was in a work environment, I wouldn't hesitate to use a debugger. Time is money, after all.

+1  A: 

In most cases, there is no debugger installed on the user's machine! And the installed product is always the release version, which doesn't contain debug information. So don't count on debuggers; sometimes the only way is logging.


When I was a TA for a first-semester programming class, the first assignment was using a debugger to step through Hello World with breakpoints and watches.

I also made a point that if I saw a getline() at the end of main(), you'd be getting points taken off. Does ls pause before giving your prompt back? No. And neither will your command line app.

Solution? Set a breakpoint on the closing curly brace of main if you want to examine the program's output before it closes.


I guess the real difference comes down to the following two attitudes, one good and one bad:

  1. I'm not really sure this is going to work, and I'm kind of in a hurry, but if it doesn't work, I can just fire up the debugger and find out where the problem is. (bad)
  2. I first want to understand this problem completely before I think up some possible solutions and choose the best one; if worst comes to worst I have a debugger I can use as a last resort. (good)

Code reviews provide good selection processes to filter out the bad implementations and allow the good through.


I think, in general, it's not a perfect approach.

The problem is step 2.

Refactoring is likely to introduce new bugs.

You should find the bug, fix the bug, check in the fix, then add your new unit tests.

Then you can proceed with steps 2, 3, and 4.

Tim McClarren
Except you should add the unit tests *before* you fix the bug (and, implicitly, run the tests to make sure they *do* find that bug).
By definition, if you are changing behavior you are not refactoring the code.
David Winslow
That's about "functional behavior". That's the goal. If you think most programmers are fully capable of refactoring code every time reliably without introducing new bugs in the process, I've got a bridge you might be interested in.
Tim McClarren

This is silly. A debugger is a tool, and if you've designed your program to respect certain invariants, but neglected to write a test to check a corner case of one of these properties, a debugger will help you quickly identify precisely which invariant was violated and how. You can then fix the violation and write an appropriate test.

The point is to design code around invariants, but a debugger neither encourages nor discourages this behaviour. It's a discipline a programmer learns through hard-won experience.

+1  A: 

Whenever I spend any amount of time writing a lot of new code, I play a game to artificially raise the cost of failing to write code that works perfectly the first time. There are several levels:

  • compiler caught bugs (typos, etc.)
  • bugs caught via an assert check
  • bugs which trigger logged error conditions
  • uncaught buggy behavior including resource leaks and concurrency

If the last level is ever triggered you're doomed (unspecified disaster).

As time has gone on, the amount of bug-fixing and debugging time needed has continued to drop as I have actively worked to develop strategies to avoid introducing errors.

In my view, not using a debugger just wastes your time and still DOES NOT provide enough incentive to continue to strive as much as possible to get it right the first time.

+1  A: 

As I recall, Linus Torvalds isn't a fan of debuggers either, for many of the same reasons.

I, for one, am a finite and limited human who can use any advantage I can get. But if you're sufficiently qualified to be in the upper echelons of programmer-dom, be my guest and jump without a net.

+1  A: 

Reading these answers, I don't understand why people consider a debugger as some sort of one-stop shop for a quick fix.

If a problem occurs, the debugger is simply a tool for getting an immediate overview of which variable contains what, allowing you to map the situation more quickly.

I recognize myself in the 6 steps given, I do that all the time... WITH the debugger as help.

I don't get how people seem to be under the impression that if you used a debugger to find the bug, you absolutely have to write a short work-around instead of doing the needed work.

It's your own fault as programmer to take the easy way out, just because you found it a bit more quickly. Just use the debugger to find it and any other variables you might need, and then do the needed work.

Debugger does not equal quick fix. At least, it doesn't have to, unless you allow it.