views: 270
answers: 3

Sometimes I accumulate a large mass of breakpoints from different debugging sessions in different places in my code. How does the debugger efficiently know when to stop for a breakpoint? It can't possibly be stopping at every single line to check the line number and source file name against a potentially long list of breakpoints, can it?

This is the Java debugger in Eclipse, but I presume the question applies to any debugger.

+6  A: 

The strategy used in many debuggers (I don't know about Eclipse specifically) is to patch the code at the breakpoint location with what is essentially a subroutine call or system call. The code jumped to holds the breakpoint information and does whatever printing or accepting of user commands is needed. It also keeps a copy of the instruction that was overwritten by the patch, so that instruction can still be executed and the program behaves exactly as it would have without the breakpoint.
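To make that concrete: a minimal sketch of the patching step, assuming a hypothetical Linux/x86-64 debugger built on ptrace (the Eclipse Java debugger talks to the JVM's own debug interface instead, but the machine-level idea is the same). The set_breakpoint name and the omitted error handling are mine:

    #include <stdint.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>

    /* Plant a software breakpoint in an already-traced child process:
     * save the original byte at 'addr' and overwrite it with INT 3
     * (0xCC), the single-byte x86 trap opcode. Returns the saved byte
     * so it can be restored later. */
    uint8_t set_breakpoint(pid_t pid, uintptr_t addr)
    {
        long word = ptrace(PTRACE_PEEKTEXT, pid, (void *)addr, NULL);
        uint8_t original = (uint8_t)(word & 0xFF);

        long patched = (word & ~0xFFL) | 0xCC;   /* splice in the trap */
        ptrace(PTRACE_POKETEXT, pid, (void *)addr, (void *)patched);

        return original;
    }

This is also why the check is cheap: the program runs at full native speed and only traps into the debugger at the patched addresses, so there is no per-line lookup against the breakpoint list.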

Man, that's some heavy-duty wizardry. I knew debuggers were doing some deep things, but I didn't know it was that deep!
skiphoppy
Not necessarily a call even. Where possible, you overwrite with an instruction that raises an interrupt, which the debugger handles.
Steve Jessop
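On Linux that interrupt surfaces to a ptrace-based debugger as a SIGTRAP stop. Continuing the hypothetical sketch above (wait_for_breakpoint is an invented name):

    #include <signal.h>
    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/user.h>
    #include <sys/wait.h>

    /* Wait for the traced child to stop. If it stopped with SIGTRAP,
     * the instruction pointer is one byte past the INT 3, which tells
     * the debugger exactly which breakpoint fired. */
    void wait_for_breakpoint(pid_t pid)
    {
        int status;
        waitpid(pid, &status, 0);

        if (WIFSTOPPED(status) && WSTOPSIG(status) == SIGTRAP) {
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, pid, NULL, &regs);
            printf("breakpoint hit at %#llx\n",
                   (unsigned long long)(regs.rip - 1));
        }
    }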
+4  A: 

To add to Nadreck's good answer:

There's an article here with more details, including some of the more exotic stuff (specific opcodes on x86; hardware breakpoints).
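As a taste of the hardware variety mentioned there: x86 has four debug-address registers (DR0-DR3) plus a control register (DR7), and a ptrace-based debugger on Linux can program them with PTRACE_POKEUSER. A rough, hypothetical sketch (error handling omitted; set_hw_breakpoint is an invented name):

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/user.h>

    /* Arm hardware breakpoint 0 on 'addr' for an already-traced child:
     * put the address in DR0, then set bit 0 of DR7 to locally enable
     * DR0. With the condition bits left at 00, the CPU traps when an
     * instruction at that address executes - no code patching needed,
     * but only four such slots exist. */
    void set_hw_breakpoint(pid_t pid, uintptr_t addr)
    {
        ptrace(PTRACE_POKEUSER, pid,
               (void *)offsetof(struct user, u_debugreg[0]), (void *)addr);
        ptrace(PTRACE_POKEUSER, pid,
               (void *)offsetof(struct user, u_debugreg[7]), (void *)1UL);
    }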

user9876
+2  A: 

Debuggers implement breakpoints in either hardware or software. The latter requires saving the original instruction, inserting special code that generates an exception, and, when the exception is raised, reinserting the original instruction and letting the user know that the breakpoint has been hit. Read my article for the gory details on software breakpoints.
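A sketch of that reinsert-and-resume dance, in the same hypothetical Linux/ptrace setting as the snippets above (step_over_breakpoint is an invented name; 'original' is the byte saved when the breakpoint was planted):

    #include <stdint.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/user.h>
    #include <sys/wait.h>

    /* Resume a child stopped on an INT 3 at 'addr': restore the saved
     * original byte, back RIP up onto it, execute just that one
     * instruction, re-insert the 0xCC, and let the child run on. */
    void step_over_breakpoint(pid_t pid, uintptr_t addr, uint8_t original)
    {
        int status;
        struct user_regs_struct regs;

        /* Put the real instruction byte back. */
        long word = ptrace(PTRACE_PEEKTEXT, pid, (void *)addr, NULL);
        ptrace(PTRACE_POKETEXT, pid, (void *)addr,
               (void *)((word & ~0xFFL) | original));

        /* RIP currently points just past the trap byte; rewind it. */
        ptrace(PTRACE_GETREGS, pid, NULL, &regs);
        regs.rip = addr;
        ptrace(PTRACE_SETREGS, pid, NULL, &regs);

        /* Execute the restored instruction, then re-arm the breakpoint. */
        ptrace(PTRACE_SINGLESTEP, pid, NULL, NULL);
        waitpid(pid, &status, 0);
        word = ptrace(PTRACE_PEEKTEXT, pid, (void *)addr, NULL);
        ptrace(PTRACE_POKETEXT, pid, (void *)addr,
               (void *)((word & ~0xFFL) | 0xCC));

        ptrace(PTRACE_CONT, pid, NULL, NULL);
    }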

tc
