2010-05-19 16:24:06 UTC
according to various articles and books, Breakpoints are implemented on x86
by the 'int3' OpCode, which issues the Software-Interrupt #3, which is
handled by the OS and dispatched to the Debugger, which then:
(all threads of the Debuggee are stopped here)
* restores the original Instruction
* performs Debugger Tasks (like displaying Code or Context)
If the User then continues with the execution, the CPU resumes at the
location of the 'int3' instruction (the Debugger first has to rewind EIP
by one byte, since int3 leaves it pointing just past the trap byte).
Because the original Instruction has been restored, the Program will
continue.
So far, so good.
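For concreteness, here is roughly how I picture the setup part, as a
ptrace(2) sketch on Linux. The function name, the omitted error handling
and the assumption that 'child' is already stopped under waitpid() are
just my own illustration of what the articles describe, not code from
any real Debugger:

/*
 * Plant an int3 Breakpoint in a traced, stopped child process.
 */
#include <sys/ptrace.h>
#include <sys/types.h>
#include <stdint.h>

static long set_breakpoint(pid_t child, uintptr_t addr)
{
    /* Read the text word at the target address and remember it. */
    long orig = ptrace(PTRACE_PEEKTEXT, child, (void *)addr, NULL);

    /* Overwrite the low byte with 0xCC, the one-byte int3 opcode. */
    long patched = (orig & ~0xFFL) | 0xCC;
    ptrace(PTRACE_POKETEXT, child, (void *)addr, (void *)patched);

    return orig;   /* kept around to restore the Instruction later */
}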
But: When is the Breakpoint restored??? I don't want to break just once. If
the Breakpoint is placed on a NOP it would be no problem, since the Debugger
can resume after EIP++.
But if the Breakpoint is set on a "real" instruction, the Debugger would
have to reset the Breakpoint *after Execution has continued*, but then there
would be a chance to miss a Breakpoint. I can't see how this can be done
without a race.
The Debugger may switch to single-step-mode to restore the Breakpoint as
soon as possible...
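If that guess is right, I'd expect the continue path to look something
like the following (again a purely hypothetical ptrace sketch for
32-bit Linux; 'step_over_and_rearm' and the saved 'orig' word from the
setup sketch above are my own names, error handling omitted):

/*
 * Resume past a Breakpoint hit, then re-arm it via single-step.
 */
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <stdint.h>

static void step_over_and_rearm(pid_t child, uintptr_t addr, long orig)
{
    struct user_regs_struct regs;
    int status;

    /* int3 left EIP one byte past the Breakpoint: rewind it and
     * put the original Instruction back. */
    ptrace(PTRACE_GETREGS, child, NULL, &regs);
    regs.eip = addr;
    ptrace(PTRACE_SETREGS, child, NULL, &regs);
    ptrace(PTRACE_POKETEXT, child, (void *)addr, (void *)orig);

    /* Execute exactly one Instruction: the kernel sets the trap
     * flag (TF in EFLAGS), so the Debuggee stops again at once. */
    ptrace(PTRACE_SINGLESTEP, child, NULL, NULL);
    waitpid(child, &status, 0);

    /* The original Instruction has now run; re-plant the int3. */
    long patched = (orig & ~0xFFL) | 0xCC;
    ptrace(PTRACE_POKETEXT, child, (void *)addr, (void *)patched);

    /* A full PTRACE_CONT can follow without missing future hits. */
}

That would close the window, if I understand it right: the Debuggee
never runs freely while the 0xCC byte is missing, so no hit can be
lost.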
Anybody out there who has information about this? It's hard to analyse,
because Debuggers do their best to hide this from the User.