I am new to programming and I am unable to understand the difference between long double and double in C and C++. I tried to Google it but could not understand it and got confused. Can anyone please help?
To quote the C++ standard, §3.9.1 ¶8:
There are three floating point types: float, double, and long double. The type double provides at least as much precision as float, and the type long double provides at least as much precision as double. The set of values of the type float is a subset of the set of values of the type double; the set of values of the type double is a subset of the set of values of the type long double. The value representation of floating-point types is implementation-defined. Integral and floating types are collectively called arithmetic types. Specializations of the standard template std::numeric_limits (18.3) shall specify the maximum and minimum values of each arithmetic type for an implementation.
That is to say that double takes at least as much memory for its representation as float, and long double takes at least as much as double. That extra memory is used for a more precise representation of a number.
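As a quick check on your own platform, something like the following minimal sketch prints the storage each type gets (the exact sizes are implementation-defined, so they vary between compilers and architectures):

    #include <cstdio>

    int main() {
        // Sizes are implementation-defined; the standard only guarantees the
        // ordering of precision (float <= double <= long double).
        std::printf("float:       %zu bytes\n", sizeof(float));
        std::printf("double:      %zu bytes\n", sizeof(double));
        std::printf("long double: %zu bytes\n", sizeof(long double));
    }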
On x86 systems, float is typically 4 bytes long and can store numbers as large as about 3×10³⁸ and as small as about 1.4×10⁻⁴⁵. It is an IEEE 754 single-precision number that stores about 7 decimal digits of a fractional number.
Also on x86 systems, double is 8 bytes long and stores numbers in the IEEE 754 double-precision format, which has a much larger range and stores numbers with more precision, about 15 decimal digits. On some other platforms, double may not be 8 bytes long and may indeed be the same as a single-precision float.
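To get a concrete feel for those digit counts, here is a small illustrative snippet (the constant is just an arbitrary value with many significant digits): the float copy diverges after about 7 digits, while the double stays accurate to about 15–16.

    #include <cstdio>

    int main() {
        double d = 3.14159265358979323846;  // arbitrary many-digit value for illustration
        float  f = d;                       // narrowing copy: only ~7 digits survive

        std::printf("double: %.17g\n", d);  // prints ~3.1415926535897931
        std::printf("float:  %.17g\n", f);  // prints ~3.1415927410125732
    }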
The standard only requires that long double is at least as precise as double, so some compilers will simply treat long double as if it were the same as double. But on most x86 chips, the 10-byte (80-bit) extended-precision format is available through the CPU's floating-point unit, which provides even more precision than the 64-bit double, about 18–19 decimal digits.
Some compilers instead support a 16-byte (128-bit) IEEE 754 quadruple precision number format with yet more precise representations and a larger range.
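Since the standard passage quoted above points at std::numeric_limits for the exact properties of each type, a sketch like the one below reports what your implementation actually chose; for example, GCC on x86-64 Linux typically reports 18 digits for long double (the 80-bit extended format), while MSVC typically reports the same 15 digits as double.

    #include <cstdio>
    #include <limits>

    int main() {
        // digits10: decimal digits that can be stored and recovered exactly.
        std::printf("float:       %2d digits, max %Lg\n",
                    std::numeric_limits<float>::digits10,
                    (long double)std::numeric_limits<float>::max());
        std::printf("double:      %2d digits, max %Lg\n",
                    std::numeric_limits<double>::digits10,
                    (long double)std::numeric_limits<double>::max());
        std::printf("long double: %2d digits, max %Lg\n",
                    std::numeric_limits<long double>::digits10,
                    std::numeric_limits<long double>::max());
    }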