To check whether an integer is odd or even, is testing the lowest bit more efficient than using modulo?
>>> def isodd(num):
...     return num & 1 and True or False
>>> isodd(10)
False
>>> isodd(9)
True
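For reference, here are the two predicates spelled out side by side; a minimal sketch, with names (isodd_bit, isodd_mod) made up just for the comparison:

def isodd_bit(num):
    # the lowest bit is 1 exactly for odd integers; Python ints behave
    # like two's complement, so this holds for negatives too (-3 & 1 == 1)
    return bool(num & 1)

def isodd_mod(num):
    # Python's % with a positive modulus always yields a non-negative
    # result, so this also holds for negatives (-3 % 2 == 1)
    return num % 2 == 1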
Yep. The timeit module in the standard library is how you check on those things. E.g.:
$ python -m timeit -s 'def isodd(x): return x & 1' 'isodd(9)'
1000000 loops, best of 3: 0.446 usec per loop
$ python -m timeit -s 'def isodd(x): return x & 1' 'isodd(10)'
1000000 loops, best of 3: 0.443 usec per loop
$ python -m timeit -s 'def isodd(x): return x % 2' 'isodd(9)'
1000000 loops, best of 3: 0.461 usec per loop
$ python -m timeit -s 'def isodd(x): return x % 2' 'isodd(10)'
1000000 loops, best of 3: 0.453 usec per loop
As you can see, on my (first-day==old==slow;-) MacBook Air, the & solution is repeatably between 7 and 18 nanoseconds faster than the % solution.
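If you'd rather script the measurement than use the command line, the timeit API does the same job; a minimal sketch (the conversion to nanoseconds per call is my own arithmetic):

import timeit

setup = "def isodd(x): return x & 1"
# repeat() returns the total time for `number` calls, once per repetition;
# the minimum is the run least disturbed by other load, so report that
best = min(timeit.repeat("isodd(9)", setup=setup, repeat=3, number=1000000))
print("%.0f ns per call" % (best / 1000000 * 1e9))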
timeit not only tells you what's faster, but by how much (just run the tests a few times), which usually shows how supremely UNimportant it is (do you really care about 10 nanoseconds' difference, when the overhead of calling the function is around 400?!-)...
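(That ~400-nanosecond call overhead is easy to check for yourself, by timing an empty function call against the bare expression; no output shown here, since the numbers depend on your machine, and the x = 9 setup just keeps CPython from constant-folding 9 & 1 at compile time:)

$ python -m timeit -s 'def f(): pass' 'f()'
$ python -m timeit -s 'x = 9' 'x & 1'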
Convincing programmers that micro-optimizations are essentially irrelevant has proven to be an impossible task -- even though it's been 35 years (over which computers have gotten orders of magnitude faster!) since Knuth wrote
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
which, as he explained, is a quote from an even older statement by Hoare. I guess everybody's totally convinced that THEIR case falls in the remaining 3%!
So instead of endlessly repeating "it doesn't matter", we (Tim Peters in particular deserves the honors there) put the timeit module in the standard Python library, which makes it trivially easy to measure such micro-benchmarks and thereby lets at least some programmers convince themselves that, hmmm, this case DOES fall in the 97% group!-)