sizeof long double and precision not matching?

Vincent · Jun 29, 2013 · Viewed 9.9k times

Consider the following C code:

#include <stdio.h>
int main(int argc, char* argv[]) 
{
    const long double ld = 0.12345678901234567890123456789012345L;
    printf("%lu %.36Lf\n", sizeof(ld), ld);
    return 0;
}

Compiled with gcc 4.8.1 under 64-bit Ubuntu 13.04, it prints:

16 0.123456789012345678901321800735590983

This tells me that a long double weighs 16 bytes, but the decimals seem to be correct only to about the 20th place. How is that possible? 16 bytes corresponds to a quad-precision format, and quad precision would give me between 33 and 36 significant decimal digits.

Answer

Eric Postpischil · Jun 29, 2013

The long double format in your C implementation is the Intel 80-bit extended-precision format, with a one-bit sign, a 15-bit exponent, and a 64-bit significand (ten bytes in total). The compiler allocates 16 bytes for it, which is wasteful of space but useful for some things such as alignment. However, the 64 significand bits provide only log10(2^64) ≈ 19.27 decimal digits of significance, which is about 20 digits.
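
You can also ask the implementation itself how many bits it actually uses. Here is a minimal sketch using only the standard <float.h> macros and math library; the exact values depend on the platform, but on the same x86-64 gcc setup they should line up with the explanation above:

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    /* Storage the compiler allocates for a long double (may include padding). */
    printf("sizeof(long double) = %zu bytes\n", sizeof(long double));

    /* Significand width in bits and the guaranteed decimal digits it supports. */
    printf("LDBL_MANT_DIG       = %d bits\n", LDBL_MANT_DIG);
    printf("LDBL_DIG            = %d decimal digits\n", LDBL_DIG);

    /* Approximate decimal precision of the significand: p * log10(2). */
    printf("decimal precision  ~= %.2f digits\n", LDBL_MANT_DIG * log10(2.0));

    return 0;
}

On a typical x86-64 Linux system this prints 16 bytes, 64 bits, 18 decimal digits, and roughly 19.27, which is why your output is only accurate to about the 20th decimal place despite sizeof reporting 16. Compile with -lm for log10.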