Discussion:
'int3' Question
Günter Prossliner
2010-05-19 16:24:06 UTC
Permalink
Hello NG,

according to various articles and books, breakpoints are implemented on x86
by the 'int3' opcode, which raises software interrupt #3. The interrupt is
handled by the OS and dispatched to the debugger, which then:

(all threads of the debuggee are stopped here)

* restores the original instruction
* performs debugger tasks (like displaying code or context)

If the user then continues execution, the CPU resumes at the location of the
'int3' instruction. Because the original instruction has been restored, the
program will continue.
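
For illustration, here is a minimal sketch of that part as I understand it,
against the Win32 debug API (error handling omitted; the global names are
just placeholders, not taken from any real debugger):

#include <windows.h>

static LPVOID g_BreakpointAddr;   /* where the breakpoint was set      */
static BYTE   g_OriginalByte;     /* the byte that 0xCC replaced       */

/* Write the one-byte 0xCC ('int3') opcode over the target instruction. */
void SetBreakpoint(HANDLE hProcess, LPVOID addr)
{
    BYTE int3 = 0xCC;
    g_BreakpointAddr = addr;
    ReadProcessMemory(hProcess, addr, &g_OriginalByte, 1, NULL);
    WriteProcessMemory(hProcess, addr, &int3, 1, NULL);
    FlushInstructionCache(hProcess, addr, 1);
}

/* Called when the debug loop receives EXCEPTION_BREAKPOINT: put the
   original byte back and rewind EIP so that the real instruction runs
   when the thread is resumed. */
void HandleBreakpoint(HANDLE hProcess, HANDLE hThread)
{
    CONTEXT ctx;
    WriteProcessMemory(hProcess, g_BreakpointAddr, &g_OriginalByte, 1, NULL);
    FlushInstructionCache(hProcess, g_BreakpointAddr, 1);

    ctx.ContextFlags = CONTEXT_CONTROL;
    GetThreadContext(hThread, &ctx);
    ctx.Eip -= 1;                 /* 'int3' is one byte long */
    SetThreadContext(hThread, &ctx);
}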

So far, so good.

But: when is the breakpoint restored? I don't want to break just once. If
the breakpoint is placed on a NOP it would be no problem, since the debugger
can simply resume after EIP++.

But if the breakpoint is set on a "real" instruction, the debugger would
have to re-set the breakpoint *after execution has continued*, and then there
would be a chance of missing a break. I can't see how this can be
synchronized.

The debugger may switch to single-step mode to restore the breakpoint as
soon as possible...

Does anybody out there have information about this? It's hard to analyse,
because debuggers do their best to hide this from the user.


GP
Don Burn
2010-05-19 16:36:00 UTC
Permalink
No, debuggers are smart enough to handle this: they either use a single-step
mode or set a second breakpoint one instruction later to restore the
breakpoint after the instruction has been executed. On some old
mini-computers and mainframes there were special instructions that could
execute a single instruction pointed to by a register to help with this.
This problem was solved over 40 years ago, so don't worry about it.
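
Roughly like this for the second-breakpoint variant (only a sketch; the
InstructionLength() helper is hypothetical and stands in for the debugger's
length disassembler, and this form only works when the next instruction is
the fall-through):

#include <windows.h>

/* Hypothetical: returns the length in bytes of the instruction at addr. */
SIZE_T InstructionLength(HANDLE hProcess, LPVOID addr);

static BYTE g_SavedTempByte;      /* byte under the temporary breakpoint */

void RearmWithTempBreakpoint(HANDLE hProcess, LPVOID bpAddr, BYTE originalByte)
{
    BYTE int3 = 0xCC;
    LPVOID next = (BYTE *)bpAddr + InstructionLength(hProcess, bpAddr);

    /* put the real instruction back at the user's breakpoint */
    WriteProcessMemory(hProcess, bpAddr, &originalByte, 1, NULL);

    /* plant a temporary int3 on the following instruction; when it is hit,
       the debugger removes it again and re-writes 0xCC at bpAddr */
    ReadProcessMemory(hProcess, next, &g_SavedTempByte, 1, NULL);
    WriteProcessMemory(hProcess, next, &int3, 1, NULL);
    FlushInstructionCache(hProcess, bpAddr, 32);
}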


Don Burn (MVP, Windows DKD)
Windows Filesystem and Driver Consulting
Website: http://www.windrvr.com
Blog: http://msmvps.com/blogs/WinDrvr
Günter Prossliner
2010-05-20 11:13:47 UTC
Permalink
Hello Don!
Thank you very much for your comment, which matches my assumption!

Not that I have any problem with this; it was just a "missing link" in my
brain.


GP
Le Chaud Lapin
2010-05-24 17:39:08 UTC
Permalink
Not sure if I understand the OP, but if he is asking if the debugger
is smart enough to figure out the boundaries of instructions, the
answer is "yes".

The programmer sets a breakpoint on a "line", and the debugger finds the
byte location in the executable at which to put the INT 3 instruction.
Restarting always occurs at the location of the INT 3 instruction, after
restoring whatever instruction was replaced by the INT 3.

When the "line" of code is high-level source code, like C/C++, the
instruction boundary of the next "line" is easily determined from an
associative set of data in the debugger code. Then, it is only
necessary to put INT 3 instructions on the lines for which the
programmer has indicated.
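
On Windows, for example, that associative data is the line-number table in
the debug information, which can be queried through dbghelp (a simplified
sketch; I am assuming the SymGetLineFromName64 call here, and a real debugger
would first register the debuggee's modules from its debug events):

#include <windows.h>
#include <dbghelp.h>              /* link with dbghelp.lib */

/* Returns the address of the first instruction generated for the given
   source line, or 0 if the line table has no entry for it.
   Assumes SymInitialize() has already been called for hProcess. */
DWORD64 AddressOfLine(HANDLE hProcess, const char *file, DWORD line)
{
    IMAGEHLP_LINE64 li = { sizeof(li) };
    LONG displacement = 0;

    if (!SymGetLineFromName64(hProcess, NULL, file, line, &displacement, &li))
        return 0;

    return li.Address;            /* where the INT 3 byte would be written */
}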

When the "line" of code is low-level assembly language, what the
programmer sees on the screen is generated by real-time disassembly of
machine code by the debugger. This is why it is possible to debug
EXE's that have no debugging information in them at all.

Note that, in both cases, it is not necessary to have a second INT 3 to
allow the restoration of the instruction replaced by the first INT 3. One
merely places INT 3s on all applicable lines, lets the thread run, waits for
exceptions, does whatever is needed, restores each instruction as the INT 3s
are hit, restarts the thread, etc.

The key is to know the boundaries of instructions, which is readily
determined from disassembly or context.

-Le Chaud Lapin-
m
2010-05-24 21:42:07 UTC
Permalink
Different debuggers use different techniques depending on their design and
the capabilities of the architecture they target. The method you describe is
consistent with a simple source-mode debugger. One significant limitation of
this method is the possibility of missing breaks because a thread executed
the instruction while it was restored for another thread.

But as Don has said, unless you are writing a new debugger, this problem
should not keep you up at night.
Le Chaud Lapin
2010-05-26 05:16:01 UTC
Permalink
Ah...I just re-read the OP. I completely missed the point.

To the OP:

I guess you know by now that you can avoid the sync problem on IA-?? by
using the trap flag (TF) in the EFLAGS register. Single-step only the thread
that hit the INT 3, restore the byte, then let all threads run.
-Le Chaud Lapin-
Günter Prossliner
2010-05-26 09:11:22 UTC
Permalink
Hello NG!
Post by m
One significant limitation of this method is the possibility of missing
breaks because a thread executed the instruction while it was restored for
another thread.
This synchronization issue was the point of my original post.

One possible implementation would be to change thread affinities so that
only the thread that hit the breakpoint will run in the process, enable
single-step mode, resume the thread, wait for EXCEPTION_SINGLE_STEP, then
restore the breakpoint and the thread affinities. That way there would be no
possibility of missing a break.
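
As a sketch (suspending the other threads instead of playing with
affinities; g_Threads / g_ThreadCount are assumed bookkeeping filled in from
the CREATE_PROCESS / CREATE_THREAD debug events, using the same handle
values throughout):

#include <windows.h>

static HANDLE g_Threads[64];      /* handles of all debuggee threads */
static int    g_ThreadCount;

/* Called from the EXCEPTION_BREAKPOINT handler. */
void BeginStepOverBreakpoint(HANDLE hProcess, HANDLE hBroken,
                             LPVOID bpAddr, BYTE originalByte)
{
    CONTEXT ctx;
    int i;

    /* freeze every thread except the one that hit the int3 */
    for (i = 0; i < g_ThreadCount; i++)
        if (g_Threads[i] != hBroken)
            SuspendThread(g_Threads[i]);

    /* restore the instruction, rewind EIP and request a single step */
    WriteProcessMemory(hProcess, bpAddr, &originalByte, 1, NULL);
    FlushInstructionCache(hProcess, bpAddr, 1);

    ctx.ContextFlags = CONTEXT_CONTROL;
    GetThreadContext(hBroken, &ctx);
    ctx.Eip    -= 1;
    ctx.EFlags |= 0x100;                  /* TF */
    SetThreadContext(hBroken, &ctx);

    /* now ContinueDebugEvent() and wait for EXCEPTION_SINGLE_STEP */
}

/* Called from the EXCEPTION_SINGLE_STEP handler. */
void FinishStepOverBreakpoint(HANDLE hProcess, LPVOID bpAddr)
{
    BYTE int3 = 0xCC;
    int i;

    /* the original instruction has executed, so 0xCC can safely go back */
    WriteProcessMemory(hProcess, bpAddr, &int3, 1, NULL);
    FlushInstructionCache(hProcess, bpAddr, 1);

    /* thaw the other threads (a no-op on the never-suspended one) */
    for (i = 0; i < g_ThreadCount; i++)
        ResumeThread(g_Threads[i]);
}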


GP
