My understanding of size_t is that it is large enough to hold any (integer) value you might expect it to be required to hold. (Perhaps that is a poor explanation?)
For example, if you were using a for loop to iterate over all elements in a vector, size_t would typically be 64 bits wide (at least on my system) so that it can hold every possible return value of vector.size(). Or at least, I think that's correct?
Therefore, is there any reason to use A rather than B?
A: for (uint64_t i = 0; i < v.size(); ++i)
B: for (size_t i = 0; i < v.size(); ++i)
If I'm wrong with my explanation or you have a better explanation, please feel free to edit.
Edit: I should add that my understanding is that size_t behaves like a normal unsigned integer - perhaps that is not correct?
size_t is the return type of sizeof.
The standard says it is a typedef of some unsigned integer type, large enough to hold the size of any possible object.
But it does not mandate whether it is smaller than, larger than, or the same size as uint64_t (a typedef for a fixed-width 64-bit unsigned integer), nor, in the latter case, whether it is the same type.
Thus, use size_t where it is semantically correct. For instance, for the size() of a std::vector<T> (std::vector gets its size_type from the allocator in use, and std::allocator<T> uses size_t).