Can anyone please explain to me what's happening here:
using System;
using System.Text;

namespace ConsoleApplication1 {
    class Program {
        static void Main(string[] args) {
            object o = 1000000.123f;
            float f = Convert.ToSingle(o);
            double d = Convert.ToDouble(f);
            Console.WriteLine(f.ToString("r"));
            Console.WriteLine(d.ToString("r"));
            Console.ReadLine();
        }
    }
}
Which outputs:
1000000.13
1000000.125
I expected:
The object o to have an underlying float type (this seems to happen, judging from the watch window, where it is typed as object {float})
That 1000000.123f would be stored in f as 1000000.125 (the nearest IEEE 754 single-precision approximation in 32 bits)
That the double would store 1000000.125 as well (this seems to happen, even though f doesn't seem to contain what I expected)
That asking for the round-trip format ("r") in ToString would give me back 1000000.125 in both cases.
Can anyone tell me what I'm doing wrong to get 1000000.13 when converting f to a string?
As you have already observed, the number 1000000.123 is stored as 1000000.125. This value is rendered as-is by double.ToString(), but truncated by float.ToString(), because showing too many digits would be misleading.
Incidentally, there is no Convert.ToSingle(float) overload, because it would simply return exactly what you passed in. Your code actually resolves to Convert.ToSingle(double): you are thus (implicitly) converting to double and then (explicitly) back to float, which is essentially a no-op.
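Not C#, but you can verify the stored single-precision value independently of .NET's formatting. Here's a quick Python sketch that uses the standard struct module to round-trip 1000000.123 through a 32-bit IEEE 754 float:

```python
import struct

# Pack 1000000.123 into a 32-bit IEEE 754 float, then unpack it
# back into a Python double to see the exact value that was stored.
stored = struct.unpack('<f', struct.pack('<f', 1000000.123))[0]

print(stored)                  # 1000000.125 -- the nearest binary32 value
print(stored == 1000000.125)   # True: 1000000.125 is exact in binary32
```

Widening 1000000.125 to double is exact, since every float value is exactly representable as a double; that is why the double prints the same value.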
Caution: Don't trust JavaScript floating point calculators. Some of them assert that 1000000.123 is stored as 1000000.1 by single-precision floats, which I'm guessing is based on the assumption that, because IEEE floats have roughly 7.22 digits of precision, they can be accurately represented in 8 digits. This is incorrect.
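To see where that 1000000.1 figure likely comes from: printing the stored value to 8 significant digits rounds it to 1000000.1, even though the exact value held in the float is 1000000.125. A Python sketch (with the binary32 value computed via struct, as above):

```python
import struct
from decimal import Decimal

# The exact binary32 value nearest to 1000000.123.
stored = struct.unpack('<f', struct.pack('<f', 1000000.123))[0]

print(format(stored, '.8g'))   # 1000000.1 -- what an 8-digit display shows
print(Decimal(stored))         # 1000000.125 -- the exact stored value
```

So an 8-digit display and the exact stored value genuinely differ; the calculator's output is a rounded rendering, not the number in memory.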