Upon running info registers in gdb, we get output similar to the following:
rax 0x1c 28
rbx 0x0 0
rcx 0x400a60 4196960
rdx 0x7fffffffde88 140737488346760
rsi 0x1 1
rdi 0x400932 4196658
rbp 0x0 0x0
rsp 0x7fffffffde68 0x7fffffffde68
r8 0x400ad0 4197072
r9 0x7ffff7dea560 140737351951712
r10 0x7fffffffdc30 140737488346160
r11 0x7ffff7732dd0 140737344908752
r12 0x4007f0 4196336
r13 0x7fffffffde80 140737488346752
r14 0x0 0
r15 0x0 0
rip 0x7ffff7732dd0 0x7ffff7732dd0
eflags 0x202 [ IF ]
cs 0x33 51
ss 0x2b 43
ds 0x0 0
es 0x0 0
fs 0x0 0
gs 0x0 0
While I do understand that for rax, rcx, etc., GDB is converting the value to decimal for the second column, this doesn't seem consistent. Some registers, namely rsp and rip, show the same value in hex even in the second column. eflags, on the other hand, shows the set flags in the second column.
What is the reason that gdb does this? If it is going to show the same info (in the case of rsp and rip), isn't it redundant? Also, how does this generalize to other architectures? (The above output is for x86-64.)
The info registers command prints out registers in both raw format (hex) and natural format.
The natural format is based on the type of the register, which is declared in XML files in gdb's source code. For example, i386/64bit-core.xml contains:
<reg name="rax" bitsize="64" type="int64"/>
<reg name="rbx" bitsize="64" type="int64"/>
<reg name="rcx" bitsize="64" type="int64"/>
<reg name="rdx" bitsize="64" type="int64"/>
<reg name="rsi" bitsize="64" type="int64"/>
<reg name="rdi" bitsize="64" type="int64"/>
<reg name="rbp" bitsize="64" type="data_ptr"/>
<reg name="rsp" bitsize="64" type="data_ptr"/>
<reg name="r8" bitsize="64" type="int64"/>
<reg name="r9" bitsize="64" type="int64"/>
<reg name="r10" bitsize="64" type="int64"/>
<reg name="r11" bitsize="64" type="int64"/>
<reg name="r12" bitsize="64" type="int64"/>
<reg name="r13" bitsize="64" type="int64"/>
<reg name="r14" bitsize="64" type="int64"/>
<reg name="r15" bitsize="64" type="int64"/>
<reg name="rip" bitsize="64" type="code_ptr"/>
<reg name="eflags" bitsize="32" type="i386_eflags"/>
<reg name="cs" bitsize="32" type="int32"/>
<reg name="ss" bitsize="32" type="int32"/>
<reg name="ds" bitsize="32" type="int32"/>
<reg name="es" bitsize="32" type="int32"/>
<reg name="fs" bitsize="32" type="int32"/>
<reg name="gs" bitsize="32" type="int32"/>
You can see that the registers with type="int64" and type="int32" are displayed as decimal values in their natural format, since they are general-purpose registers and can be used both for referencing memory and for holding plain values.
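If the natural format is not the one you want, you can always override it with print's format letters. A quick illustrative session against the dump above (the values come from that dump, so your numbers will differ):

(gdb) print $rax
$1 = 28
(gdb) print/x $rax
$2 = 0x1c
(gdb) print/d $rsp
$3 = 140737488346728

Here /x forces hexadecimal and /d forces decimal output, regardless of the register's declared type.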
Registers with type="data_ptr" and type="code_ptr", on the other hand, show hexadecimal values in their natural format, since they are normally used for referencing memory addresses.
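You can see the pointer typing directly if you print those registers, since gdb gives them real C pointer types (a sketch; the actual addresses, and any symbol shown next to rip, depend on where the program is stopped):

(gdb) print $rsp
$4 = (void *) 0x7fffffffde68
(gdb) print $rip
$5 = (void (*)()) 0x7ffff7732dd0

This also addresses the redundancy question for rsp and rip: both columns hold the same number, but the second one is the value rendered in the register's natural pointer format, and for rip gdb appends the nearest symbol (e.g. <main+12>) when one is available.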
Registers with type="i386_eflags" have the flags that are currently set printed as their natural format, since for this register it is more useful to a human to know which flags are set than to read the raw hex value.
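You can verify the decoding by hand against the dump above:

(gdb) print/x $eflags
$6 = 0x202

0x202 is binary 10 0000 0010: bit 9 is IF (interrupt enable), and bit 1 is a reserved bit that always reads as 1 on x86. So IF is the only real flag set, which is exactly what [ IF ] in the second column says.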
For other architectures, it depends on how the register types are defined in their source code. You can look at the definitions for ARM, AArch64, 32-bit x86, and many others under binutils-gdb/gdb/features/.
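As an illustration, paraphrased from memory rather than quoted verbatim (check aarch64-core.xml in that directory for the exact contents), the AArch64 description follows the same pattern: the general-purpose x registers carry no explicit type and so default to plain integers, while sp and pc are pointer-typed, so info registers makes the same decimal-versus-hex distinction there:

<reg name="x0" bitsize="64"/>
...
<reg name="sp" bitsize="64" type="data_ptr"/>
<reg name="pc" bitsize="64" type="code_ptr"/>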
EDIT:
Sources: @MarkPlotnick's answer to "Why is "info register ebp" in gdb not displaying a decimal number?" and @perror's answer at https://reverseengineering.stackexchange.com/questions/9221/output-of-gdb-info-registers/9222#9222. Sorry, I forgot to mention the sources.