Sometimes I see integer constants defined in hexadecimal instead of decimal. This is a small part taken from the GL10 class:
public static final int GL_STACK_UNDERFLOW = 0x0504;
public static final int GL_OUT_OF_MEMORY = 0x0505;
public static final int GL_EXP = 0x0800;
public static final int GL_EXP2 = 0x0801;
public static final int GL_FOG_DENSITY = 0x0B62;
public static final int GL_FOG_START = 0x0B63;
public static final int GL_FOG_END = 0x0B64;
public static final int GL_FOG_MODE = 0x0B65;
It's obviously simpler to write 2914 instead of 0x0B62, so is there perhaps some performance gain? I actually don't think so, since converting the literal should be the compiler's job.
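Just to check my own assumption, a quick sketch (the class name is made up): both notations denote the same int value, and the compiler stores that value in the class file regardless of how the literal was written in source.

public class LiteralCheck {
    public static void main(String[] args) {
        int hex = 0x0B62; // hexadecimal literal
        int dec = 2914;   // the same value written in decimal
        System.out.println(hex == dec);                // true
        System.out.println(Integer.toHexString(dec));  // b62
    }
}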
It is likely for organizational and visual cleanliness. Base 16 has a much simpler relationship to binary than base 10, because in base 16 each digit corresponds to exactly four bits.
Notice how in the listing above, related constants share many hex digits. Represented in decimal, the bits they have in common would be much less clear; conversely, decimal values that happen to share digits need not have similar bit patterns at all.
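For instance, printing the binary form of the GL_FOG_* constants from the question makes the shared 0x0B6 prefix visible digit by digit (a small illustrative sketch):

public class HexVsBinary {
    public static void main(String[] args) {
        // Each hex digit maps to exactly four bits: 0x0B62 is 0000 1011 0110 0010.
        System.out.println(Integer.toBinaryString(0x0B62)); // 101101100010 (GL_FOG_DENSITY)
        System.out.println(Integer.toBinaryString(0x0B63)); // 101101100011 (GL_FOG_START)
        System.out.println(Integer.toBinaryString(0x0B64)); // 101101100100 (GL_FOG_END)
        // In decimal these are 2914, 2915, 2916, which hides the shared bit prefix.
    }
}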
Also, in many situations it's desirable to bitwise-OR constants together to build a combination of flags. If each constant is constrained to have only a subset of the bits set, the combination can later be separated back out, and hex constants make it immediately clear which bits each value uses.
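A sketch with made-up flag names (not from GL10): hex shows at a glance that each flag occupies its own bit, and a combined value can be taken apart again with a bitwise AND.

public class FlagDemo {
    // Hypothetical flags, each confined to a distinct bit.
    public static final int FLAG_READ    = 0x0001; // bit 0
    public static final int FLAG_WRITE   = 0x0002; // bit 1
    public static final int FLAG_EXECUTE = 0x0004; // bit 2

    public static void main(String[] args) {
        int mode = FLAG_READ | FLAG_EXECUTE;          // combine with bitwise OR
        System.out.println((mode & FLAG_WRITE) != 0); // false: WRITE was not set
        System.out.println((mode & FLAG_READ) != 0);  // true: READ can be recovered
    }
}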
There are two other reasonable possibilities. Octal (base 8) simply encodes three bits per digit. Binary-coded decimal uses four bits per digit but forbids digit values above 9, which is a disadvantage because it cannot represent every bit pattern that binary can.
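For comparison, here is the same value rendered in each base (Java also accepts octal literals with a leading 0, and binary literals with a 0b prefix since Java 7); which notation to use is purely a readability choice:

public class RadixDemo {
    public static void main(String[] args) {
        int value = 0x0B62; // GL_FOG_DENSITY
        System.out.println(Integer.toString(value, 16)); // b62
        System.out.println(Integer.toString(value, 10)); // 2914
        System.out.println(Integer.toString(value, 8));  // 5542
        System.out.println(Integer.toString(value, 2));  // 101101100010
    }
}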