Accurate timing of functions in Python

Atilio Jobson · May 20, 2009

I'm programming in Python on Windows and would like to accurately measure the time it takes for a function to run. I have written a function time_it that takes another function, runs it, and returns the time it took to run.

import time

def time_it(f, *args):
    start = time.clock()
    f(*args)
    # elapsed time, converted from seconds to milliseconds
    return (time.clock() - start) * 1000

I call this 1000 times and average the result. (The factor of 1000 at the end converts the result from seconds to milliseconds.)
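For concreteness, the averaging I do looks roughly like this (the test function here is just a stand-in for whatever I'm actually measuring):

def test_function(n):
    # stand-in workload; any function could go here
    return sum(range(n))

runs = 1000
total = sum(time_it(test_function, 10000) for _ in range(runs))
average_ms = total / runs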

This function seems to work, but I have a nagging feeling that I'm doing something wrong, and that measuring it this way uses more time than the function actually takes when it's running.

Is there a more standard or accepted way to do this?

When I changed my test function to call print so that it takes longer, my time_it function returns an average of 2.5 ms while cProfile.run('f()') reports an average of 7.0 ms. I figured my function would overestimate the time if anything, so what is going on here?
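The profiling call I'm comparing against looks roughly like this (f is just the stand-in test function that calls print):

import cProfile

def f():
    # stand-in test function that does some output so the call is measurable
    print("hello")

# profile a single call; cProfile prints a table of call counts and per-function times
cProfile.run('f()')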

One additional note: it is the relative time of functions compared to each other that I care about, not the absolute time, since that will obviously vary depending on hardware and other factors.

Answer

Alex Martelli · May 20, 2009

Use the timeit module from the Python standard library.

Basic usage:

from timeit import Timer

# The first argument is the code to be timed; the second "setup" argument is run only once
# and is not included in the execution time.
t = Timer("""x.index(123)""", setup="""x = range(1000)""")

print t.timeit() # prints a float, for example 5.8254
# ...or...
print t.timeit(1000) # repeat 1000 times instead of the default 1,000,000
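Timer will also accept a zero-argument callable in place of a code string, which is convenient when you already have the function you want to measure. Roughly (my_function here is just a placeholder):

from timeit import Timer

def my_function():
    # placeholder for the function you actually want to measure
    return [i * i for i in range(1000)]

# a zero-argument callable can be passed instead of a code string
t = Timer(my_function)
print t.timeit(1000)  # total seconds for 1000 calls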