I am reading a C book that discusses the ranges of the floating-point types. The author gives this table:
Type    Smallest Positive Value    Largest Value       Precision
======  =========================  ==================  =========
float   1.17549 x 10^-38           3.40282 x 10^38     6 digits
double  2.22507 x 10^-308          1.79769 x 10^308    15 digits
I don't know where the numbers in the "Smallest Positive Value" and "Largest Value" columns come from.
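I assume these are the same values that <float.h> exposes as FLT_MIN/FLT_MAX, DBL_MIN/DBL_MAX, and FLT_DIG/DBL_DIG; a minimal check, assuming a standard C environment, would be:

    #include <stdio.h>
    #include <float.h>

    int main(void) {
        /* smallest positive normalized value, largest value, decimal precision */
        printf("float:  %e  %e  %d digits\n", FLT_MIN, FLT_MAX, FLT_DIG);
        printf("double: %e  %e  %d digits\n", DBL_MIN, DBL_MAX, DBL_DIG);
        return 0;
    }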
A 32-bit floating-point number has 23 + 1 bits of mantissa (23 stored, plus an implicit leading 1) and an 8-bit exponent (though only -126 to 127 is used), so the largest number you can represent is:
(1 + 1/2 + ... + 1/2^23) * 2^127 =
(2^23 + 2^22 + ... + 1) * 2^(127 - 23) =
(2^24 - 1) * 2^104 ~= 3.4e38
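If that reasoning is right, computing the product directly should reproduce FLT_MAX exactly. Here is a small check using ldexp from <math.h>, assuming IEEE 754 single precision (the double mantissa has 53 bits, so the product is exact):

    #include <stdio.h>
    #include <float.h>
    #include <math.h>

    int main(void) {
        /* (2^24 - 1) * 2^104, as derived above */
        double largest = ldexp((double)((1 << 24) - 1), 104);
        printf("derived: %.6e\n", largest);          /* 3.402823e+38 */
        printf("FLT_MAX: %.6e\n", (double)FLT_MAX);  /* should match */
        return 0;
    }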