I'm using BigDecimal for the numbers in my application, for example with JPA. I did a bit of research on the terms 'precision' and 'scale', but I don't understand exactly what they are.
Can anyone explain the meaning of 'precision' and 'scale' for a BigDecimal value?
@Column(precision = 11, scale = 2)
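For context, the annotation sits on an entity field roughly like this (the class and field names here are just illustrative):

import java.math.BigDecimal;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Invoice {

    @Id
    private Long id;

    // maps to a database column along the lines of NUMERIC(11, 2)
    @Column(precision = 11, scale = 2)
    private BigDecimal amount;
}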
Thanks!
A BigDecimal is defined by two values: an arbitrary precision integer (the unscaled value) and a 32-bit integer scale. The value of the BigDecimal is defined to be unscaledValue × 10^(-scale).
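A quick sketch showing how the two parts relate to the value (the value 123.45 here is just an example):

import java.math.BigDecimal;

public class DefinitionDemo {
    public static void main(String[] args) {
        BigDecimal d = new BigDecimal("123.45");

        System.out.println(d.unscaledValue()); // 12345
        System.out.println(d.scale());         // 2

        // Rebuild the value from its two parts: 12345 * 10^(-2) = 123.45
        System.out.println(new BigDecimal(d.unscaledValue(), d.scale())); // 123.45
    }
}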
Precision:
The precision is the number of digits in the unscaled value. For instance, for the number 123.45, the precision returned is 5.
So, precision indicates the length of the arbitrary precision integer. Here are a few examples of numbers with the same scale, but different precision:
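(The values below are my own, chosen for illustration.)

import java.math.BigDecimal;

public class PrecisionDemo {
    public static void main(String[] args) {
        // All three values share scale 5, but the precision differs
        print(new BigDecimal("0.12345")); // precision = 5, scale = 5
        print(new BigDecimal("0.01234")); // precision = 4, scale = 5
        print(new BigDecimal("0.00001")); // precision = 1, scale = 5
    }

    static void print(BigDecimal d) {
        System.out.printf("%s -> precision = %d, scale = %d%n",
                d.toPlainString(), d.precision(), d.scale());
    }
}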
In the special case that the number is equal to zero (i.e. 0.000), the precision is always 1.
Scale:
If zero or positive, the scale is the number of digits to the right of the decimal point. If negative, the unscaled value of the number is multiplied by ten to the power of the negation of the scale. For example, a scale of -3 means the unscaled value is multiplied by 1000.
This means that the integer value of the BigDecimal is multiplied by 10^(-scale).
Here are a few examples of the same precision, with different scales:
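(Again, the values are my own, chosen for illustration.)

import java.math.BigDecimal;
import java.math.BigInteger;

public class ScaleDemo {
    public static void main(String[] args) {
        BigInteger unscaled = BigInteger.valueOf(12345); // precision is 5 in every case

        for (int scale : new int[] {5, 2, 0, -1}) {
            BigDecimal d = new BigDecimal(unscaled, scale);
            System.out.printf("scale %2d -> %s%n", scale, d.toPlainString());
        }
        // scale  5 -> 0.12345
        // scale  2 -> 123.45
        // scale  0 -> 12345
        // scale -1 -> 123450
    }
}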
BigDecimal.toString:
The toString method for a BigDecimal behaves differently based on the scale and precision. (Thanks to @RudyVelthuis for pointing this out.)
- If scale == 0, the integer is just printed out, as-is.
- If scale < 0, E-notation is always used (e.g. 5 with scale -1 produces "5E+1").
- If scale >= 0 and precision - scale - 1 >= -6, a plain decimal number is produced (e.g. 10000000 with scale 1 produces "1000000.0").
- Otherwise, i.e. when scale >= 0 and precision - scale - 1 is less than -6, E-notation is used (e.g. 10 with scale 8 produces "1.0E-7", since precision - scale - 1 equals -7).

More examples:
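(A small sketch with values of my own that exercises each of the cases above.)

import java.math.BigDecimal;
import java.math.BigInteger;

public class ToStringDemo {
    public static void main(String[] args) {
        show(12345, 0);    // scale == 0                        -> "12345"
        show(5, -1);       // scale < 0                         -> "5E+1"
        show(10000000, 1); // precision - scale - 1 = 6 >= -6   -> "1000000.0"
        show(10, 8);       // precision - scale - 1 = -7 < -6   -> "1.0E-7"
    }

    static void show(long unscaledValue, int scale) {
        BigDecimal d = new BigDecimal(BigInteger.valueOf(unscaledValue), scale);
        System.out.printf("unscaled = %d, scale = %d -> %s%n", unscaledValue, scale, d);
    }
}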