Java: Why should we use BigDecimal instead of Double in the real world?

Vinoth Kumar C M · Jun 12, 2011 · Viewed 62.8k times

When dealing with real-world monetary values, I am advised to use BigDecimal instead of Double. But I have not gotten a convincing explanation except, "It is normally done that way".

Can you please shed some light on this question?

Answer

It's called loss of precision, and it is very noticeable when working with either very big or very small numbers. The binary representation of decimal numbers with a fractional part is in many cases an approximation, not an exact value. To understand why, you need to read up on floating-point representation in binary. Here is a link: http://en.wikipedia.org/wiki/IEEE_754-2008. Here is a quick demonstration:
In bc (an arbitrary-precision calculator language) with precision=10:

(1/3+1/12+1/8+1/15) = 0.6083333332
(1/3+1/12+1/8) = 0.541666666666666
(1/3+1/12) = 0.416666666666666

Java double:
0.6083333333333333
0.5416666666666666
0.41666666666666663

Java float:

0.60833335
0.5416667
0.4166667
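
The same sums are easy to reproduce in Java. Here is a minimal sketch (the class name is arbitrary) that prints the double and float results shown above:

    public class PrecisionDemo {
        public static void main(String[] args) {
            // double: 1/3, 1/12, and 1/15 have no exact binary representation,
            // so each term is already rounded before the addition happens
            System.out.println(1.0/3 + 1.0/12 + 1.0/8 + 1.0/15); // 0.6083333333333333
            System.out.println(1.0/3 + 1.0/12 + 1.0/8);          // 0.5416666666666666
            System.out.println(1.0/3 + 1.0/12);                  // 0.41666666666666663

            // float: only about 7 significant decimal digits, so the error shows sooner
            System.out.println(1f/3 + 1f/12 + 1f/8 + 1f/15);     // 0.60833335
            System.out.println(1f/3 + 1f/12 + 1f/8);             // 0.5416667
            System.out.println(1f/3 + 1f/12);                    // 0.4166667
        }
    }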


If you are a bank responsible for thousands of transactions every day, even if they are not to and from one and the same account (or maybe they are), you have to have reliable numbers. Binary floating-point numbers are not reliable - not unless you understand how they work and their limitations.
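
To make the monetary case concrete, here is a minimal sketch (the class name is arbitrary) contrasting double with BigDecimal when adding ten cents ten times. Note that the BigDecimal is constructed from a String; new BigDecimal(0.1) would inherit the double's binary approximation:

    import java.math.BigDecimal;

    public class MoneyDemo {
        public static void main(String[] args) {
            // double: 0.1 is stored as a binary approximation,
            // and the error accumulates with every addition
            double total = 0.0;
            for (int i = 0; i < 10; i++) {
                total += 0.1;
            }
            System.out.println(total); // 0.9999999999999999

            // BigDecimal: "0.1" is stored exactly as a decimal value
            BigDecimal exact = BigDecimal.ZERO;
            for (int i = 0; i < 10; i++) {
                exact = exact.add(new BigDecimal("0.1"));
            }
            System.out.println(exact); // 1.0
        }
    }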