Is there significant overhead in using different versions of SHA hashing (hashlib module)?

Emilien · Feb 11, 2010 · Viewed 10.6k times

The hashlib Python module provides the following hash algorithm constructors: md5(), sha1(), sha224(), sha256(), sha384(), and sha512().

Assuming I don't want to use md5, is there a big difference in using, say, sha1 instead of sha512? I want to use something like hashlib.shaXXX(hashString).hexdigest(), but as it's just for caching, I'm not sure I need the (possible) extra overhead of 512...

Does this overhead exist, and if so, how big is it?

Answer

sttwister · Feb 11, 2010

Why not just benchmark it?

>>> import hashlib
>>> import timeit
>>> def sha1(s):
...     return hashlib.sha1(s).hexdigest()
...
>>> def sha512(s):
...     return hashlib.sha512(s).hexdigest()
...
>>> t1 = timeit.Timer("sha1(b'asdf' * 100)", "from __main__ import sha1")
>>> t512 = timeit.Timer("sha512(b'asdf' * 100)", "from __main__ import sha512")
>>> t1.timeit()
3.2463729381561279
>>> t512.timeit()
6.5079669952392578

So on my machine, sha512 is twice as slow as sha1. But as GregS said, why would you use a secure hash for caching? Try the built-in hash() function, which should be really fast and well tuned:

>>> s = "asdf"
>>> hash(s)
-618826466
>>> s = "xxx"
>>> hash(s)
943435
>>> hash("xxx")
943435

Or better yet, use the built-in Python dictionaries. Maybe you can tell us more about what you plan on caching.

EDIT: I'm thinking that you are trying to achieve something like this:

cache_key = hashlib.sha1(object_to_cache_as_string).hexdigest()
cache[cache_key] = object_to_cache

What I was referring to by "use the built-in Python dictionaries" is that you can simplify the above:

cache[object_to_cache_as_string] = object_to_cache

In this way, Python takes care of the hashing so you don't have to!

Regarding your particular problem, you could refer to Python hashable dicts to make a dictionary hashable (there's a sketch of the idea after the snippet below). Then, all you'd need to do to cache the object is:

cache[object_to_cache] = object_to_cache
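The hashable-dicts approach boils down to subclassing dict and defining __hash__; a minimal sketch of that idea (the class name HashableDict is my own, and it assumes the keys and values are themselves hashable and that the dict isn't mutated after being used as a key):

class HashableDict(dict):
    def __hash__(self):
        # frozenset makes the hash independent of insertion order
        return hash(frozenset(self.items()))

object_to_cache = HashableDict(user="emilien", page=42)
cache = {}
cache[object_to_cache] = object_to_cache  # works: __hash__ is defined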

EDIT: Notes about Python 3

Python 3.3 enables hash randomization by default, which means that the value computed by hash() can differ between processes, so you should not rely on it across runs unless you set the PYTHONHASHSEED environment variable to 0.
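If you do need a key that stays stable across processes and runs, falling back to hashlib is one way to get it; a minimal sketch (the helper name stable_key is my own):

import hashlib

def stable_key(s):
    # Unlike hash(), this digest is identical in every process and run.
    return hashlib.sha1(s.encode("utf-8")).hexdigest()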

References:
- https://docs.python.org/3/reference/datamodel.html#object.__hash__
- https://docs.python.org/3/using/cmdline.html#envvar-PYTHONHASHSEED