I implemented a search that caches its results, with keys of type State (a class holding 7 short ints) and values of type Score
(a class holding 3 doubles). Using unordered_map was at least 20 times slower than map. Why?
Edit: Darn it! My hash function was
namespace std {
size_t hash<State>::operator()(State const& s) const {
size_t retval = hash<short>()(s.s[0]);
    for (int i = 1; i < R; i += 2) { // i = 1, 3, 5: pack pairs of shorts into one int
int x = (static_cast<int>(s.s[i + 1]) << 16)
+ (static_cast<int>(s.s[i]));
hash_combine(retval, x);
}
    // bug: falls off the end without "return retval;"
}
}
I forgot to return retval, so it was all colliding! I wish unordered_map had a hash_function_quality() function that reports the average number of collisions.
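In the meantime, the bucket interface that unordered_map already exposes can approximate such a diagnostic. A minimal sketch (the helper name max_bucket_size is my own):

```cpp
#include <algorithm>
#include <cstddef>

// Rough stand-in for the wished-for hash_function_quality(): report the
// largest bucket in an unordered container. With a well-distributed hash
// this stays close to the load factor; with a degenerate hash (e.g. one
// that forgets to return, so every key collides) it approaches the total
// element count.
template <typename UnorderedContainer>
std::size_t max_bucket_size(const UnorderedContainer& c) {
    std::size_t worst = 0;
    for (std::size_t b = 0; b < c.bucket_count(); ++b)
        worst = std::max(worst, c.bucket_size(b));
    return worst;
}
```

An average-collision figure can be computed the same way by summing bucket_size(b) over the non-empty buckets.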
The performance of unordered_map hinges on both the speed and the quality of your hash function, and the relationship is not straightforward. Case in point: if you use the simplest possible hash function,
std::size_t myHash(const MyObjectType& _object) { return 1; }
then what you end up with is a container that behaves like a list rather than a hash table: all the items map to a single bucket, and every lookup has to traverse that bucket until it reaches the item you want (which takes O(N) time).
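That degenerate case is easy to observe through the bucket interface; a small sketch (ConstantHash and the int keys are stand-ins for illustration):

```cpp
#include <cstddef>
#include <unordered_map>

// The pathological hash described above: every key hashes to the same
// value, so every element lands in one bucket and lookups walk it
// linearly -- O(N) instead of O(1).
struct ConstantHash {
    std::size_t operator()(int) const { return 1; }
};

// Size of the bucket that `key` hashes to; with ConstantHash this equals
// the total number of elements in the map.
inline std::size_t bucket_len(const std::unordered_map<int, int, ConstantHash>& m,
                              int key) {
    return m.bucket_size(m.bucket(key));
}
```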
What you need to do is look at two things:
1. the speed of your hash function, and
2. how evenly it distributes keys, i.e. how rarely distinct keys collide.
Either of those by itself can and will kill the performance.