I ran some simple tests on the abs() and fabs() functions, and I don't understand what the advantages of using fabs() are, if it:
1) is slower
2) works only on floats
3) raises an exception when used on a type it can't handle (see the quick check right after this list)
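A quick check of points 2) and 3) in a plain Python 3 session (a minimal sketch; the exact traceback text may differ between versions):

>>> from math import fabs
>>> fabs(-5)           # integers are accepted, but the result is always a float
5.0
>>> fabs(3 + 4j)       # anything that cannot be converted to float raises
Traceback (most recent call last):
  ...
TypeError: can't convert complex to float
>>> abs(3 + 4j)        # abs() handles complex numbers (returns the magnitude)
5.0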
In [1]: %timeit abs(5)
10000000 loops, best of 3: 86.5 ns per loop
In [3]: %timeit fabs(5)
10000000 loops, best of 3: 115 ns per loop
In [4]: %timeit abs(-5)
10000000 loops, best of 3: 88.3 ns per loop
In [5]: %timeit fabs(-5)
10000000 loops, best of 3: 114 ns per loop
In [6]: %timeit abs(5.0)
10000000 loops, best of 3: 92.5 ns per loop
In [7]: %timeit fabs(5.0)
10000000 loops, best of 3: 93.2 ns per loop
It's slower even on floats!
From where I am standing, the only advantage of using fabs() is that it makes your code more readable: by using it, you clearly state your intention of working with floating-point values.
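One concrete consequence of that intent (a small sketch, assuming Python 3): fabs() always returns a float, while abs() preserves the argument's type:

>>> from math import fabs
>>> abs(2), type(abs(2))
(2, <class 'int'>)
>>> fabs(2), type(fabs(2))
(2.0, <class 'float'>)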
Is there any other use of fabs()?
From an email response from Tim Peters:
Why does math have an fabs function? Both it and the abs builtin function wind up calling fabs() for floats. abs() is faster to boot.
Nothing deep -- the math module supplies everything in C89's standard libm (+ a few extensions), fabs() is a std C89 libm function.
There isn't a clear (to me) reason why one would be faster than the other; sounds accidental; math.fabs() could certainly be made faster (as currently implemented (via math_1), it endures a pile of general-purpose "try to guess whether libm should have set errno" boilerplate that's wasted (there are no domain or range errors possible for fabs())).
It seems there is no compelling reason to use fabs(). Just use abs() for virtually all purposes.
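For what it's worth, a hedged illustration of why abs() is the more general choice: it dispatches to the argument's __abs__ method, so it also works with complex, Decimal, and Fraction values, which math.fabs() would have to convert to float first (failing for complex, and losing exactness for Decimal/Fraction):

>>> from decimal import Decimal
>>> from fractions import Fraction
>>> abs(Decimal("-1.1"))    # stays an exact Decimal
Decimal('1.1')
>>> abs(Fraction(-1, 3))    # stays an exact Fraction
Fraction(1, 3)
>>> abs(-3 - 4j)            # magnitude of a complex number
5.0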