I was looking through the disassembly of my program (because it crashed) and noticed lots of `xchg ax, ax` instructions. I googled it and found out it's essentially a NOP, but why does Visual Studio emit an `xchg` instead of a `nop`?

The application is a C# .NET 3.5 64-bit application, compiled by Visual Studio.
On x86, the `NOP` instruction is `XCHG AX, AX`. The two mnemonics assemble to the same binary opcode. (Actually, I suppose an assembler could use any `xchg` of a register with itself, but `AX` or `EAX` is what's typically used for the `nop`, as far as I know.) `xchg ax, ax` has the properties of changing no register values and changing no flags (hey, it's a no-op!).
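You can check this with any assembler; here's a minimal sketch in NASM syntax (16-bit mode, to match the `AX` form). Assemble with `nasm -f bin same.asm -o same.bin` (the file name is just an example) and hex-dump the output; both lines emit the identical single byte:

```nasm
bits 16
nop             ; the assembler emits 90
xchg ax, ax     ; also emits 90: same opcode, different mnemonic
```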
Edit (in response to a comment by Anon.):

Oh right, now I remember there are several encodings for the `xchg` instruction. Some take a ModR/M byte (like many Intel x86 instructions) that specifies a source and a destination; those encodings take more than one byte. There's also a special encoding that uses a single byte and exchanges a general-purpose register with `(E)AX`. If the specified register is also `(E)AX`, then you have a single-byte `NOP` instruction. You can also specify that `(E)AX` be exchanged with itself using the larger variant of the `xchg` instruction.
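Here's a sketch of the two encoding families in NASM syntax (32-bit mode); the byte values in the comments are what NASM emits:

```nasm
bits 32
xchg eax, eax       ; NASM picks the short form: the single byte 90 (same as nop)
db 0x87, 0xC0       ; the ModR/M form of xchg eax, eax, hand-encoded as two bytes
xchg ebx, ebx       ; neither operand is EAX, so the ModR/M form is required: 87 DB
```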
I'm guessing that MSVC uses the multi-byte version of `xchg` with `(E)AX` as the source and destination when it wants to chew up more than one byte for no operation: it takes the same number of cycles as the single-byte `xchg` but uses more space. In the disassembly you won't see the multi-byte `xchg` decoded as `NOP`, even though the result is the same.
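As a hedged illustration of the padding idea (not MSVC's actual output), NASM's `times` prefix makes the trade-off visible: both lines below fill six bytes of dead space, but the second needs half as many instructions:

```nasm
bits 32
times 6 nop             ; six bytes of padding: 90 90 90 90 90 90
times 3 db 0x87, 0xC0   ; also six bytes: 87 C0 87 C0 87 C0, three instructions
```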
Specifically, `xchg eax, eax` or `nop` can be encoded as opcode `0x90` or as `0x87 0xC0`, depending on whether you want it to take up one or two bytes. The Visual Studio disassembler (and probably others) will decode opcode `0x90` as the `NOP` instruction and decode opcode `0x87 0xC0` as `xchg eax, eax`.
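One way to see both decodings side by side (a sketch assuming NASM and its bundled `ndisasm` tool are available; file names are just examples): assemble the three bytes below with `nasm -f bin dec.asm -o dec.bin`, then run `ndisasm -b 32 dec.bin`. The first byte decodes as `nop` and the next two as `xchg eax,eax`:

```nasm
db 0x90             ; decoded by ndisasm as: nop
db 0x87, 0xC0       ; decoded by ndisasm as: xchg eax,eax
```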
It's been a while since I've done detailed assembly language work, so chances are I'm wrong on at least one count here...