Why is int typically 32 bit on 64 bit compilers?

user2341104 · Jul 5, 2013 · Viewed 20.4k times

Why is int typically 32 bit on 64 bit compilers? When I started programming, I was taught that int is typically the same width as the underlying architecture. And I agree that this makes sense: it seems logical for an unspecified-width integer to be as wide as the underlying platform (unless we are talking about 8 or 16 bit machines, where such a small range for int would barely be usable).

Later on I learned that int is typically 32 bit on most 64 bit platforms, and I wonder what the reason for this is. For storing data I would prefer an explicitly specified width anyway, so that leaves generic usage for int, which doesn't offer any performance advantage: at least on my system I get the same performance for 32 and 64 bit integers. That leaves the binary memory footprint, which would be slightly reduced, although not by a lot...
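To illustrate the premise, here is a minimal sketch; the sizes in the comments are assumptions that hold on a typical LP64 target such as x86-64 Linux (64 bit Windows uses LLP64, where long is 4 bytes), since the standard only guarantees minimum ranges, not exact widths:

```cpp
#include <cstdio>

int main() {
    // Typical sizes on an LP64 platform -- an assumption, not a guarantee.
    std::printf("short:     %zu\n", sizeof(short));      // typically 2
    std::printf("int:       %zu\n", sizeof(int));        // typically 4, even on a 64 bit target
    std::printf("long:      %zu\n", sizeof(long));       // typically 8 on LP64, 4 on Win64 (LLP64)
    std::printf("long long: %zu\n", sizeof(long long));  // typically 8
    std::printf("void*:     %zu\n", sizeof(void*));      // 8 on a 64 bit target
    return 0;
}
```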

Answer

James Kanze · Jul 5, 2013

Bad choices on the part of the implementors?

Seriously, according to the standard, "Plain ints have the natural size suggested by the architecture of the execution environment", which does mean a 64 bit int on a 64 bit machine. One could easily argue that anything else is non-conformant. But in practice, the issues are more complex: switching from a 32 bit int to a 64 bit int would not allow most programs to handle larger data sets (unlike the switch from 16 bits to 32); most programs are probably constrained by other considerations. And it would increase the size of the data sets, and thus reduce locality and slow the program down.
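To make the footprint point concrete, here is a minimal sketch; the element count is arbitrary, chosen purely for illustration:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

int main() {
    constexpr std::size_t N = 1000000;  // hypothetical element count, for illustration only

    // If int were 64 bits, code that today stores counters in plain int would
    // roughly double its working set, so fewer values fit in each cache line.
    std::printf("32 bit elements: %zu bytes\n", N * sizeof(std::int32_t));  // 4,000,000
    std::printf("64 bit elements: %zu bytes\n", N * sizeof(std::int64_t));  // 8,000,000
    return 0;
}
```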

Finally (and probably most importantly), if int were 64 bits, short would have to be either 16 bits or 32 bits, and you'd have no way of specifying the other (except with the typedefs in <stdint.h>, and the intent is that these should only be used in very exceptional circumstances). I suspect that this was the major motivation.
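As a rough sketch of that escape hatch: under the common 64 bit model the built-in types already cover each width (short for 16, int for 32, long or long long for 64), whereas with a 64 bit int one of 16 or 32 bits would be nameable only through the exact-width typedefs. The static_asserts below assume 8 bit bytes:

```cpp
#include <cstdint>

// Common 64 bit model:
//   short      -> 16 bits
//   int        -> 32 bits
//   long long  -> 64 bits
// With a 64 bit int, short could cover only one of 16 or 32 bits, and the
// other width would be reachable only via these <cstdint> typedefs.
static_assert(sizeof(std::int16_t) == 2, "exact-width 16 bit type");
static_assert(sizeof(std::int32_t) == 4, "exact-width 32 bit type");
static_assert(sizeof(std::int64_t) == 8, "exact-width 64 bit type");

int main() { return 0; }
```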