Let:

double d = 0.1;
float f = 0.1;

Should the expression (f > d) return true or false?
Empirically, the answer is true. However, I expected it to be false.
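For reference, a minimal program that reproduces the observation (a sketch, assuming a typical IEEE-754 C++ implementation):

#include <iostream>

int main() {
    double d = 0.1;  // 0.1 rounded to double's 53-bit significand
    float f = 0.1;   // the double constant 0.1, converted to float (24-bit significand)

    // For the comparison, f is converted to double, which preserves its value.
    std::cout << (f > d) << '\n';  // prints 1 (true) on a typical IEEE-754 system
    return 0;
}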
My expectation came from the fact that 0.1 cannot be represented exactly in binary, and that double has 15 to 16 decimal digits of precision while float has only about 7. So I reasoned that both stored values are less than 0.1, with the double being the closer of the two.
I need an exact explanation for why the result is true.
I'd say the answer depends on the rounding mode used when the double value 0.1 is converted to float. float has 24 bits of significand precision, and double has 53. In binary, 0.1 is:
0.1₁₀ = 0.0001100110011001100110011001100110011001100110011…₂
             ^        ^         ^   ^
             1        10        20  24
So if we round up at the 24th significant bit, we get

0.1₁₀ ≈ 0.000110011001100110011001101₂

which is greater than both the exact value and the more precise 53-bit approximation stored in the double. (The double value is also rounded up, but only in its 53rd significant bit, so its excess over 0.1 is far smaller, and the float ends up being the larger of the two.)
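One way to see the round-up directly is to scale both values by 2^27, which moves the first 24 significant bits of 0.1 to the left of the binary point (a sketch assuming IEEE-754 float and double with the default round-to-nearest mode; the outputs in the comments are what a typical implementation prints):

#include <cmath>
#include <cstdio>

int main() {
    float  f = 0.1f;  // 0.1 rounded to 24 significand bits
    double d = 0.1;   // 0.1 rounded to 53 significand bits

    // Both values lie in [2^-4, 2^-3), so scaling by 2^27 puts the first
    // 24 significant bits of each value to the left of the binary point.
    std::printf("float(0.1)  * 2^27 = %.6f\n", std::ldexp((double)f, 27)); // 13421773.000000
    std::printf("double(0.1) * 2^27 = %.6f\n", std::ldexp(d, 27));         // 13421772.800000

    // Printing more digits than the formats guarantee shows the same thing:
    // both stored values are slightly above 0.1, and the float overshoots more.
    std::printf("float(0.1)  = %.20f\n", (double)f); // 0.10000000149011611938
    std::printf("double(0.1) = %.20f\n", d);         // 0.10000000000000000555
    return 0;
}

Converted back to double for the comparison, the float keeps its larger value, so (f > d) evaluates to true.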