I came across the data type `int32_t` in a C program recently. I know that it stores 32 bits, but don't `int` and `int32` do the same?

Also, I want to use `char` in a program. Can I use `int8_t` instead? What is the difference?

To summarize: what is the difference between `int32`, `int`, `int32_t`, `int8` and `int8_t` in C?
Between `int32` and `int32_t` (and likewise between `int8` and `int8_t`) the difference is pretty simple: the C standard defines `int8_t` and `int32_t`, but does not define anything named `int8` or `int32`. The latter (if they exist at all) probably come from some other header or library, most likely one that predates the addition of `int8_t` and `int32_t` in C99.
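For illustration, here's a minimal sketch of the distinction; the `int32` typedef is hypothetical, standing in for whatever a legacy header might have provided:

```c
#include <stdint.h>   /* C99 header that defines int8_t, int32_t, etc. */
#include <stdio.h>

/* Hypothetical legacy typedef: pre-C99 codebases often rolled their own
   fixed-width names like this. "int32" is NOT defined by the C standard. */
typedef int int32;

int main(void)
{
    int32_t a = 42;   /* standard: exactly 32 bits, two's complement */
    int32   b = 42;   /* whatever the legacy header decided it was */
    printf("%d %d\n", (int)a, (int)b);
    return 0;
}
```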
Plain `int` is quite a bit different from the others. Where `int8_t` and `int32_t` each have a specified size, `int` can be any size >= 16 bits. At different times, both 16 bits and 32 bits have been reasonably common (and for a 64-bit implementation, it should probably be 64 bits).
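You can check what your implementation chose with a quick sketch like this (the output is implementation-defined; only the minimum guarantees are fixed by the standard):

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* These values vary by implementation; the standard only
       guarantees int is at least 16 bits (INT_MAX >= 32767). */
    printf("CHAR_BIT    = %d\n", CHAR_BIT);
    printf("sizeof(int) = %zu bytes\n", sizeof(int));
    printf("int range   = %d .. %d\n", INT_MIN, INT_MAX);
    return 0;
}
```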
On the other hand, `int` is guaranteed to be present in every implementation of C, where `int8_t` and `int32_t` are not. Whether this matters to you is open to question though: if you use C on small embedded systems and/or older compilers, it may be a problem; if you use it primarily with a modern compiler on desktop/server machines, it probably won't be.
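If portability to such implementations matters, one common approach (sketched here, relying on the rule that `<stdint.h>` defines `INT32_MAX` exactly when `int32_t` exists) is to test for the limit macro:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* The exact-width types are optional: an implementation that lacks
       a suitable type simply doesn't define them or their limit macros,
       so INT32_MAX is defined if and only if int32_t exists. */
#ifdef INT32_MAX
    int32_t x = 123456;
    printf("int32_t available, x = %ld\n", (long)x);
#else
    printf("no int32_t on this implementation\n");
#endif
    return 0;
}
```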
Oops -- missed the part about `char`. You'd use `int8_t` instead of `char` if (and only if) you want an integer type guaranteed to be exactly 8 bits in size. If you want to store characters, you probably want to use `char` instead. Its size can vary (in terms of number of bits) but it's guaranteed to be exactly one byte. One slight oddity though: there's no guarantee about whether a plain `char` is signed or unsigned (and many compilers can make it either one, depending on a compile-time flag). If you need to ensure that it's either signed or unsigned, you need to specify that explicitly.
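A small sketch of that last point; `CHAR_MIN` from `<limits.h>` reveals which choice your implementation made, and the explicitly qualified types sidestep the question entirely:

```c
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Whether plain char can hold negative values is implementation-
       defined; CHAR_MIN is 0 if char is unsigned, negative if signed. */
    printf("plain char is %s\n", CHAR_MIN < 0 ? "signed" : "unsigned");

    signed char   s = -1;   /* explicitly signed: portable */
    unsigned char u = 255;  /* explicitly unsigned: portable */
    printf("s = %d, u = %u\n", (int)s, (unsigned)u);
    return 0;
}
```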