Here's what I want to do:
const int64_t randomIntNumber = reinterpret_cast<int64_t> (randomUintNumber);
where randomUintNumber is of type uint64_t.
The error is (MSVC 2010):
error C2440: 'reinterpret_cast' : cannot convert from 'const uint64_t' to 'int64_t'
1> Conversion is a valid standard conversion, which can be performed implicitly or by use of static_cast, C-style cast or function-style cast
Why doesn't it compile? Both types have the same bit length; isn't that what reinterpret_cast is intended for?
Because that's not what reinterpret_cast is for. All the permitted conversions with reinterpret_cast involve pointers or references, with the exception that an integer or enum type can be reinterpret_cast to itself. This is all defined in the standard, [expr.reinterpret.cast].
I'm not certain what you're trying to achieve here, but if you want randomIntNumber to have the same value as randomUintNumber, then do:
const int64_t randomIntNumber = randomUintNumber;
If that results in a compiler warning, or if you just want to be more explicit, then:
const int64_t randomIntNumber = static_cast<int64_t>(randomUintNumber);
The result of the cast has the same value as the input if randomUintNumber is less than 2^63. Otherwise the result is implementation-defined, but I expect all known implementations that have int64_t will define it to do the obvious thing: the result is equivalent to the input modulo 2^64.
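If you want to see what that looks like in practice, here is a minimal sketch (my own example, not from the question) converting the largest uint64_t value; every mainstream two's-complement implementation prints -1:

#include <cstdint>
#include <iostream>

int main() {
    const uint64_t biggest = UINT64_MAX;                     // 2^64 - 1, out of range for int64_t
    const int64_t converted = static_cast<int64_t>(biggest); // implementation-defined result
    std::cout << converted << '\n';                          // prints -1 on mainstream implementations
}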
If you want randomIntNumber to have the same bit pattern as randomUintNumber, then you can do this:
int64_t tmp;
std::memcpy(&tmp, &randomUintNumber, sizeof(tmp)); // declared in <cstring>; copies the bit pattern verbatim
const int64_t randomIntNumber = tmp;
Since int64_t is guaranteed to use two's complement representation, you would hope that the implementation defines static_cast to have the same result as this for out-of-range values of uint64_t, but AFAIK that isn't actually guaranteed by the standard.
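If you want to check that assumption on your own platform, a quick sanity test like the following (a sketch of mine, not anything the standard requires) compares the static_cast result against the raw bit pattern:

#include <cassert>
#include <cstdint>
#include <cstring>

void check_cast_matches_bits(uint64_t u) {
    const int64_t via_cast = static_cast<int64_t>(u); // implementation-defined when u > INT64_MAX
    int64_t via_bits;
    std::memcpy(&via_bits, &u, sizeof(via_bits));     // always well-defined
    assert(via_cast == via_bits);                     // holds on every mainstream platform I know of
}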
Even if randomUintNumber is a compile-time constant, unfortunately randomIntNumber here is not a compile-time constant. But then, how "random" is a compile-time constant? ;-)
If you need to work around that, and you don't trust the implementation to be sensible about converting out-of-range unsigned values to signed types, then something like this:
const int64_t randomIntNumber =
    randomUintNumber <= INT64_MAX ?
        (int64_t) randomUintNumber :
        (int64_t) (randomUintNumber - INT64_MAX - 1) + INT64_MIN;
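To see why the second branch is correct, wrap it in a function and trace the extreme case (the function name is mine, purely for illustration):

#include <cstdint>

int64_t to_int64_portable(uint64_t u) {
    return u <= INT64_MAX
        ? (int64_t) u
        : (int64_t) (u - INT64_MAX - 1) + INT64_MIN;
}
// For u == UINT64_MAX: u - INT64_MAX - 1 == 2^63 - 1, which fits
// in int64_t, and adding INT64_MIN then yields -1, which is exactly
// u reduced modulo 2^64.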
Now, I'm in favour of writing truly portable code where possible, but even so I think this verges on paranoia.
Btw, you might be tempted to write this:
const int64_t randomIntNumber = reinterpret_cast<const int64_t&>(randomUintNumber);
or equivalently:
const int64_t randomIntNumber = *reinterpret_cast<const int64_t*>(&randomUintNumber);
This isn't quite guaranteed to work, because although int64_t and uint64_t, where they exist, are guaranteed to be a signed type and an unsigned type of the same size, they aren't actually guaranteed to be the signed and unsigned versions of a standard integer type. So it is implementation-specific whether or not this code violates strict aliasing, and code that violates strict aliasing has undefined behavior.

The following does not violate strict aliasing, and is OK provided that the bit pattern in randomUintNumber is a valid representation of a value of long long:
unsigned long long x = 0;
const long long y = reinterpret_cast<long long &>(x);
So on implementations where int64_t and uint64_t are typedefs for long long and unsigned long long, my reinterpret_cast is OK. And as with the implementation-defined conversion of out-of-range values to signed types, you would expect that sensible implementations make them corresponding signed/unsigned types. So, like the static_cast and the implicit conversion, you'd expect it to work on any sensible implementation, but it is not actually guaranteed.
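If you'd rather have the compiler tell you whether you're on such an implementation, a compile-time check along these lines (my own sketch, using C++11 <type_traits>) refuses to build where that assumption fails:

#include <cstdint>
#include <type_traits>

static_assert(std::is_same<int64_t, long long>::value &&
              std::is_same<uint64_t, unsigned long long>::value,
              "int64_t/uint64_t are not typedefs for the long long types here");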