I know the decimal number 0.1 cannot be represented exactly with a finite binary number (explanation), so double n = 0.1 will lose some precision and will not be exactly 0.1. On the other hand, 0.5 can be represented exactly because 0.5 = 1/2 = 0.1b.
Having said that, it is understandable that adding 0.1 three times will not give exactly 0.3, so the following code prints false:
double sum = 0, d = 0.1;
for (int i = 0; i < 3; i++)
sum += d;
System.out.println(sum == 0.3); // Prints false, OK
But then how is it that adding 0.1 five times gives exactly 0.5? The following code prints true:
double sum = 0, d = 0.1;
for (int i = 0; i < 5; i++)
sum += d;
System.out.println(sum == 0.5); // Prints true, WHY?
If 0.1 cannot be represented exactly, how is it that adding it 5 times gives exactly 0.5, which can be represented precisely?
The rounding error is not random; the way it is implemented attempts to minimise the error. This means that sometimes the error is not visible, or there is no error at all.
For example, 0.1 as a double is not exactly 0.1, i.e. new BigDecimal("0.1").compareTo(new BigDecimal(0.1)) returns -1, but 0.5 is exactly 1.0/2.
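You can check this claim directly; a minimal sketch (the class name ExactCheck is just for illustration), using the fact that new BigDecimal(double) captures the exact binary value of the double while new BigDecimal(String) captures the exact decimal value:

```java
import java.math.BigDecimal;

public class ExactCheck {
    public static void main(String[] args) {
        // -1: the double closest to 0.1 is slightly larger than the true decimal 0.1
        System.out.println(new BigDecimal("0.1").compareTo(new BigDecimal(0.1)));
        //  0: 0.5 is a power of two over two, so the double is exact
        System.out.println(new BigDecimal("0.5").compareTo(new BigDecimal(0.5)));
    }
}
```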
This program shows you the true values involved.
BigDecimal _0_1 = new BigDecimal(0.1);
BigDecimal x = _0_1;
for (int i = 1; i <= 10; i++) {
    System.out.println(i + " x 0.1 is " + x + ", as double " + x.doubleValue());
    x = x.add(_0_1);
}
prints
1 x 0.1 is 0.1000000000000000055511151231257827021181583404541015625, as double 0.1
2 x 0.1 is 0.2000000000000000111022302462515654042363166809082031250, as double 0.2
3 x 0.1 is 0.3000000000000000166533453693773481063544750213623046875, as double 0.30000000000000004
4 x 0.1 is 0.4000000000000000222044604925031308084726333618164062500, as double 0.4
5 x 0.1 is 0.5000000000000000277555756156289135105907917022705078125, as double 0.5
6 x 0.1 is 0.6000000000000000333066907387546962127089500427246093750, as double 0.6000000000000001
7 x 0.1 is 0.7000000000000000388578058618804789148271083831787109375, as double 0.7000000000000001
8 x 0.1 is 0.8000000000000000444089209850062616169452667236328125000, as double 0.8
9 x 0.1 is 0.9000000000000000499600361081320443190634250640869140625, as double 0.9
10 x 0.1 is 1.0000000000000000555111512312578270211815834045410156250, as double 1.0
Note that 0.3 is slightly off, but when you get to 0.4 the bits have to shift down one to fit into the 53-bit limit and the error is discarded. Again, an error creeps back in for 0.6 and 0.7, but for 0.8 to 1.0 the error is discarded.
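You can watch the error appear and disappear in the running double sum itself; a small sketch (class name SumSteps is illustrative) printing each intermediate value alongside its exact bit pattern via Double.toHexString:

```java
public class SumSteps {
    public static void main(String[] args) {
        double sum = 0;
        for (int i = 1; i <= 5; i++) {
            sum += 0.1;
            System.out.println(i + " x 0.1 = " + sum
                    + " (" + Double.toHexString(sum) + ")");
        }
        System.out.println(sum == 0.5); // true
    }
}
```

Step 3 prints 0.30000000000000004, but at step 4 the addition lands exactly half way between two doubles and round-to-even picks the double 0.4, discarding the accumulated error, so step 5 arrives at exactly 0.5.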
Adding it 5 times should accumulate the error, not cancel it.
The reason there is an error at all is the limited precision, i.e. 53 bits. This means that as the number gets larger it needs more bits, so bits have to be dropped off the end. This causes rounding, which in this case is in your favour.
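The 53-bit limit shows up as the gap between adjacent doubles (the ulp) doubling each time the exponent steps up; a small sketch using Math.ulp:

```java
public class UlpDemo {
    public static void main(String[] args) {
        // Below 0.5 adjacent doubles are 2^-54 apart; from 0.5 upwards they are
        // 2^-53 apart, so an accumulated error smaller than half the new gap
        // cannot be represented any more and is rounded away.
        System.out.println(Math.ulp(0.4)); // 5.551115123125783E-17 (2^-54)
        System.out.println(Math.ulp(0.5)); // 1.1102230246251565E-16 (2^-53)
    }
}
```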
You can get the opposite effect when producing a smaller number, e.g. 0.1 - 0.0999 => 1.0000000000000286E-4, and you see more error than before.
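That subtraction can be run directly; a minimal sketch (class name SmallDiff is illustrative):

```java
public class SmallDiff {
    public static void main(String[] args) {
        double diff = 0.1 - 0.0999;
        System.out.println(diff);         // 1.0000000000000286E-4
        System.out.println(diff == 1e-4); // false: the error is now large relative to the result
    }
}
```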
An example of this is the Java 6 question "Why does Math.round(0.49999999999999994) return 1?". In that case the loss of a bit in the calculation results in a big difference in the answer.
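Java 6 implemented Math.round(double) as (long) Math.floor(x + 0.5d) (changed in Java 7), and that old behaviour can be reproduced directly. 0.49999999999999994 is the largest double strictly below 0.5, and adding 0.5 gives the exact value 1 - 2^-54, half way between two doubles, which round-to-even pushes up to exactly 1.0:

```java
public class RoundBug {
    public static void main(String[] args) {
        double x = 0.49999999999999994; // largest double strictly less than 0.5
        // x + 0.5 is exactly half way between (1 - 2^-53) and 1.0,
        // so round-to-even picks 1.0 and the floor becomes 1.
        System.out.println(x + 0.5);                    // 1.0
        System.out.println((long) Math.floor(x + 0.5)); // 1, the old Java 6 result
    }
}
```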