int length = (int) floor(log10((float) number)) + 1;
My question is essentially a math question: WHY does taking the log10() of a number, flooring the result, casting it to an int, and adding 1 correctly compute the number of digits in number?
I really want to know the deep mathematical explanation please!
For an integer number with n digits, its value lies between 10^(n - 1) (inclusive) and 10^n (exclusive), so log10(number) lies in the half-open interval [n - 1, n). The floor function then discards the fractional part, leaving exactly n - 1, and adding 1 gives n, the number of digits. For example, 4728 has 4 digits: 1000 <= 4728 < 10000, so log10(4728) ~= 3.6746, floor gives 3, and 3 + 1 = 4. Note that this argument assumes number >= 1; log10 is undefined at 0 and for negative values.
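To see the identity at work, here is a minimal, self-contained C sketch (the helper name digit_count is my own) that evaluates the formula at the boundary values 10^k and 10^k - 1, where the digit count changes. It casts to double rather than float on purpose: a float cast can round a value like 999999999 up to 1.0e9, making the formula report 10 digits instead of 9.

#include <math.h>
#include <stdio.h>

/* Digit count of a positive integer via the log10 identity.
   Assumes number >= 1, since log10 is undefined at 0. */
int digit_count(int number)
{
    /* 10^(n-1) <= number < 10^n  =>  n - 1 <= log10(number) < n */
    return (int) floor(log10((double) number)) + 1;
}

int main(void)
{
    int samples[] = {1, 9, 10, 4728, 99999, 100000, 999999999};
    int count = sizeof samples / sizeof samples[0];
    for (int i = 0; i < count; i++)
        printf("%9d has %d digit(s)\n", samples[i], digit_count(samples[i]));
    return 0;
}

Compiled with a C math library (e.g. gcc demo.c -lm), this prints 1, 1, 2, 4, 5, 6, and 9 digits respectively, matching the floor-plus-one argument above at each power-of-ten boundary.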