I just recently ran across the constants in the primitive type wrapper classes, like Double.POSITIVE_INFINITY and Double.NEGATIVE_INFINITY. The API defines the first as:
A constant holding the positive infinity of type double. It is equal to the value returned by Double.longBitsToDouble(0x7ff0000000000000L).
The others have definitions along these same lines.
What I'm having trouble with is understanding what these constants actually are. They can't actually be or represent positive/negative infinities, because the system is by nature finite. Is it just some arbitrary setting of bits which the Java creators deemed would define the concept of infinity, or do these actually have some kind of special value? If it is just an arbitrary string of bits interpreted as a double, then is there some normal number out there that, when interpreted as a double, will return POSITIVE_INFINITY instead of whatever value is actually expected?
Forgive me if the answer to this is obvious given the Double.longBitsToDouble(0x7ff0000000000000L) part of the API. Truthfully, that description is pretty arcane to me, and I won't pretend to understand what the hexadecimal value actually means or represents.
Java floating point is based on the IEEE 754 Binary Floating-Point Standard, the first version of which was issued in 1985, so it is much older than Java. Given the widespread hardware implementation of IEEE 754 by the time Java was being defined, the Java creators had little choice.
Each IEEE 754 floating point number has three components: a sign bit, an exponent, and a mantissa. Simplifying considerably, the magnitude of a normal number is:
mantissa * (2 ** exponent)
where "**" represents power.
The leading bit is the sign bit. In doubles, the next 11 bits are the exponent, and the remaining 52 bits are the mantissa.
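To make the layout concrete, here is a minimal sketch (the dump method is my own, not part of any API) that uses Double.doubleToRawLongBits to split a double into its three fields:

    static void dump(double d) {
        long bits = Double.doubleToRawLongBits(d);   // reinterpret the 64 bits as a long
        long sign     = bits >>> 63;                 // top bit
        long exponent = (bits >>> 52) & 0x7ffL;      // next 11 bits
        long mantissa = bits & 0xfffffffffffffL;     // low 52 bits
        System.out.printf("%s: sign=%d exponent=0x%03x mantissa=0x%013x%n",
                d, sign, exponent, mantissa);
    }

    // dump(1.0)                      prints sign=0 exponent=0x3ff mantissa=0x0000000000000
    // dump(Double.POSITIVE_INFINITY) prints sign=0 exponent=0x7ff mantissa=0x0000000000000
    // dump(Double.NEGATIVE_INFINITY) prints sign=1 exponent=0x7ff mantissa=0x0000000000000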
The bit patterns with all the exponent bits on are reserved for infinities and NaNs. All normal numbers have at least one zero bit in the exponent. The two infinities are represented by having all exponent bits on, and all mantissa bits zero. The leading sign bit distinguishes positive and negative infinity.
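You can check this with the JDK itself. Note that 0x7ff0000000000000L is just an ordinary long (9218868437227405312 in decimal); only when those same 64 bits are reinterpreted as a double do they mean positive infinity:

    long bits = 0x7ff0000000000000L;                // all 11 exponent bits on, mantissa all zero
    System.out.println(bits);                       // 9218868437227405312, a perfectly normal long
    System.out.println(Double.longBitsToDouble(bits));                 // Infinity
    System.out.println(Double.longBitsToDouble(0xfff0000000000000L));  // -Infinity (sign bit on)

So, to the question above: yes, there are ordinary 64-bit integer values that, when reinterpreted as a double, read back as POSITIVE_INFINITY, and 0x7ff0000000000000L is exactly such a value.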
The choice of all exponent bits on for the special cases is not arbitrary. It is easier to chop off one of the extremes than to deal with a gap in the middle of a range of numbers, especially for hardware implementations. Using the all-bits-off exponent for the special cases instead would have prevented encoding zero with the all-bits-off pattern, and would have given the largest-magnitude values, the infinities, the smallest exponent, which would also have made hardware more complicated. The all-bits-on exponent is definitely the best choice for the infinities.
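As a quick illustration of why keeping the all-bits-off exponent for normal numbers matters, positive zero is encoded as all 64 bits off:

    System.out.println(Double.doubleToRawLongBits(0.0));                     // 0: all bits off
    System.out.println(Long.toHexString(Double.doubleToRawLongBits(-0.0)));  // 8000000000000000: only the sign bit on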
The two infinities are each used to represent two things: actually infinite results, and results that are too large in absolute magnitude to represent in the normal number system, i.e. numbers larger than Double.MAX_VALUE or smaller than -Double.MAX_VALUE. 1.0/0.0 is infinite. So is 2*Double.MAX_VALUE.
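Both routes to infinity are easy to demonstrate (expected output shown in comments):

    System.out.println(1.0 / 0.0);             // Infinity: an actually infinite result
    System.out.println(2 * Double.MAX_VALUE);  // Infinity: overflow past the largest normal double
    System.out.println(-2 * Double.MAX_VALUE == Double.NEGATIVE_INFINITY);  // true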
Some algorithms can be simplified, with fewer special cases, by allowing intermediate results to be infinite in either sense. Doing so also allows, for example, even a line parallel to the y-axis to have a storable gradient that can be used in calculations.
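For example (my own illustration, not from the original answer): taking the gradient of a line through two points with the same x-coordinate produces infinity instead of needing a special case, and the result still behaves sensibly in later calculations such as Math.atan:

    double x1 = 3.0, y1 = 1.0;
    double x2 = 3.0, y2 = 5.0;                // same x: the line is parallel to the y-axis
    double gradient = (y2 - y1) / (x2 - x1);  // 4.0 / 0.0
    System.out.println(gradient);             // Infinity
    System.out.println(Math.atan(gradient));  // 1.5707963267948966, i.e. pi/2, a vertical line's angle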