I've been reading C Primer Plus and got to this example:
    #include <stdio.h>

    int main(void)
    {
        float aboat = 32000.0;
        double abet = 2.14e9;
        long double dip = 5.32e-5;

        printf("%f can be written %e\n", aboat, aboat);
        printf("%f can be written %e\n", abet, abet);
        printf("%f can be written %e\n", dip, dip);
        return 0;
    }
After I ran this on my MacBook, I was quite shocked at the output:
    32000.000000 can be written 3.200000e+04
    2140000000.000000 can be written 2.140000e+09
    2140000000.000000 can be written 2.140000e+09
So I looked around and found out that the correct format specifier for displaying a long double is %Lf. However, I still can't understand why I got the value of the double abet instead of what I get when I run the same program on Cygwin, Ubuntu and iDeneb, which is roughly:

    -1950228512509697486020297654959439872418023994430148306244153100897726713609013030397828640261329800797420159101801613476402327600937901161313172717568.000000 can be written 2.725000e+02
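
Based on that, here is the corrected listing as I understand it, with the L length modifier on the last call (just my reading of the printf rules, not something from the book):

    #include <stdio.h>

    int main(void)
    {
        float aboat = 32000.0;
        double abet = 2.14e9;
        long double dip = 5.32e-5;

        /* %f and %e expect a double; float arguments are promoted
           to double in varargs, but long double needs %Lf / %Le */
        printf("%f can be written %e\n", aboat, aboat);
        printf("%f can be written %e\n", abet, abet);
        printf("%Lf can be written %Le\n", dip, dip);
        return 0;
    }
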
Any ideas?
Try looking at the varargs calling convention on OSX; that might explain it.
I'm guessing the compiler passes the first long double parameter on the stack (or in an FPU register), and the first double parameter in CPU registers (or on the stack). Either way, they're passed in different places, so when the third call is made, the value from the second call is still lying around and the callee picks it up. But that is just a guess.
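
Whatever the exact mechanism, passing a long double where %f expects a double is undefined behaviour, so the differences between platforms aren't surprising. Here's a minimal sketch of one way to see that the types really are different, and to sidestep the problem without the L modifier (the sizes in the comment are typical x86 values, not guaranteed):

    #include <stdio.h>

    int main(void)
    {
        long double dip = 5.32e-5;

        /* double and long double are usually different sizes
           (e.g. 8 vs 16 bytes on x86-64), so %f cannot simply
           reinterpret a long double argument */
        printf("sizeof(double) = %zu, sizeof(long double) = %zu\n",
               sizeof(double), sizeof(long double));

        /* Converting explicitly keeps the varargs type and the
           conversion specifier in agreement, at the cost of any
           extra long double precision */
        printf("%f can be written %e\n", (double)dip, (double)dip);
        return 0;
    }

Compilers with format checking (e.g. gcc or clang with -Wall / -Wformat) will also warn about the original mismatch, which is the quickest way to catch it.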