I know that IEEE 754 specifies special floating-point values for quantities that are not real numbers. In Java, casting those values to a primitive int
does not throw an exception as I would have expected. Instead we have the following:
int n;
n = (int)Double.NaN; // n == 0
n = (int)Double.POSITIVE_INFINITY; // n == Integer.MAX_VALUE
n = (int)Double.NEGATIVE_INFINITY; // n == Integer.MIN_VALUE
What is the rationale for not throwing exceptions in these cases? Is this an IEEE standard, or was it merely a choice by the designers of Java? Are there bad consequences that I am unaware of if such casts could throw exceptions?
What is the rationale for not throwing exceptions in these cases?
I imagine that the reasons include:
These are edge cases, and are likely to occur rarely in applications that perform such casts.
The behavior is not "totally unexpected".
When an application casts from a double to an int, significant loss of information is expected. The application is either going to ignore this possibility, or the cast will be preceded by checks to guard against it ... which could also check for these cases.
No other double / float operations result in exceptions, and (IMO) it would be inconsistent to throw one only in this case.
There could possibly be a performance hit ... on some hardware platforms (current or future).
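The guard checks mentioned in the third point above could be sketched like this (safeDoubleToInt is a hypothetical helper name, not a standard API):

```java
public class SafeCast {
    // Rejects the IEEE 754 special values (and out-of-range finite
    // values) before performing the narrowing cast.
    static int safeDoubleToInt(double d) {
        if (Double.isNaN(d)) {
            throw new IllegalArgumentException("NaN has no int value");
        }
        if (d < Integer.MIN_VALUE || d > Integer.MAX_VALUE) {
            throw new IllegalArgumentException("out of int range: " + d);
        }
        return (int) d; // truncates toward zero, as specified by the JLS
    }

    public static void main(String[] args) {
        System.out.println(safeDoubleToInt(42.9)); // prints 42
        try {
            safeDoubleToInt(Double.POSITIVE_INFINITY);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

An application that already needs such a range check gets the NaN and infinity cases essentially for free, which weakens the case for building an exception into the cast itself.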
A commentator said this:
"I suspect the decision to not have the conversion throw an exception was motivated by a strong desire to avoid throwing exceptions for any reasons, for fear of forcing code to add it to a throws clause."
I don't think that is a plausible explanation:
The Java language designers1 don't have a mindset of avoiding throwing exceptions "for any reason". There are numerous examples in the Java APIs that demonstrate this.
The issue of the throws clause is addressed by making the exception unchecked. Indeed, many related exceptions like ArithmeticException or ClassCastException are declared as unchecked for this reason.
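For what it's worth, the JDK later did add exact-conversion helpers that throw an unchecked ArithmeticException on lossy conversion; Math.toIntExact (since Java 8) is the long-to-int analogue of what the OP is asking about, and it needs no throws clause:

```java
public class ExactConversion {
    public static void main(String[] args) {
        // Fits in an int: the value is returned unchanged.
        System.out.println(Math.toIntExact(42L)); // prints 42

        // Does not fit: throws an unchecked ArithmeticException,
        // so the caller is not forced to declare or catch it.
        try {
            Math.toIntExact(Long.MAX_VALUE);
        } catch (ArithmeticException e) {
            System.out.println("threw: " + e.getMessage());
        }
    }
}
```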
Is this an IEEE standard, or was it merely a choice by the designers of Java?
The latter, I think.
Are there bad consequences that I am unaware of if such casts could throw exceptions?
None apart from the obvious ones ...
(But it is not really relevant. The JLS and JVM spec say what they say, and changing them would be liable to break existing code. And it is not just Java code we are talking about now ...)
I've done a bit of digging. A lot of the x86 instructions that could be used to convert from double to integer seem to generate hardware interrupts ... unless masked. It is not clear (to me) whether the specified Java behavior is easier or harder to implement than the alternative suggested by the OP.
1 - I don't dispute that some Java programmers do think this way. But they were / are not the Java designers, and this question is asking specifically about the Java design rationale.