This is not a question about how to compare two BigDecimal objects - I know that you can use compareTo instead of equals to do that, since equals is documented as:
Unlike compareTo, this method considers two BigDecimal objects equal only if they are equal in value and scale (thus 2.0 is not equal to 2.00 when compared by this method).
The question is: why has equals been specified in this seemingly counter-intuitive manner? That is, why is it important to be able to distinguish between 2.0 and 2.00?
It seems likely that there must be a reason for this, since the Comparable documentation, which specifies the compareTo method, states:
It is strongly recommended (though not required) that natural orderings be consistent with equals.
I imagine there must be a good reason for ignoring this recommendation.
Because in some situations, an indication of precision (i.e. the margin of error) may be important.
For example, if you're storing measurements made by two physical sensors, perhaps one is 10x more precise than the other. It may be important to represent this fact.
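As a rough sketch of that idea (the readings and class name are made up for illustration), the scale of each BigDecimal can record how many decimal places the sensor actually reported:

    import java.math.BigDecimal;

    public class SensorReadings {
        public static void main(String[] args) {
            // Hypothetical readings: the second sensor reports one more decimal place
            BigDecimal coarse = new BigDecimal("2.0");   // measured to the nearest 0.1
            BigDecimal fine   = new BigDecimal("2.00");  // measured to the nearest 0.01

            // The scale preserves how precisely each value was measured
            System.out.println(coarse.scale()); // 1
            System.out.println(fine.scale());   // 2

            // Numerically interchangeable, but not identical as measurements
            System.out.println(coarse.compareTo(fine)); // 0
            System.out.println(coarse.equals(fine));    // false
        }
    }

Treating 2.0 and 2.00 as equal would silently discard that precision information, which is presumably why the class keeps them distinct under equals.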