The book Numerical Recipes offers a method to calculate 64-bit hash codes in order to reduce the number of collisions.
The algorithm is shown at http://www.javamex.com/tutorials/collections/strong_hash_code_implementation_2.shtml and is copied here for reference:
// Lookup-table construction. The snippet as posted was missing the return
// type, the local declaration of byteTable, and the HSTART/HMULT constants
// that hash() relies on; the constant values below are the ones given in
// the linked article.
private static final long HSTART = 0xBB40E64DA205B064L;
private static final long HMULT = 7664345821815920749L;
private static final long[] byteTable = createLookupTable();

private static final long[] createLookupTable() {
    long[] byteTable = new long[256];
    long h = 0x544B2FBACAAF1684L;
    for (int i = 0; i < 256; i++) {
        // scramble h with 31 rounds of an xorshift-style mix per table entry
        for (int j = 0; j < 31; j++) {
            h = (h >>> 7) ^ h;
            h = (h << 11) ^ h;
            h = (h >>> 10) ^ h;
        }
        byteTable[i] = h;
    }
    return byteTable;
}
public static long hash(CharSequence cs) {
    long h = HSTART;
    final long hmult = HMULT;
    final long[] ht = byteTable;
    final int len = cs.length();
    for (int i = 0; i < len; i++) {
        char ch = cs.charAt(i);
        // fold the low byte and then the high byte of each char into the hash
        h = (h * hmult) ^ ht[ch & 0xff];
        h = (h * hmult) ^ ht[(ch >>> 8) & 0xff];
    }
    return h;
}
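For completeness, a quick usage sketch (not from the article; assumes the members above sit in the same class):

public static void main(String[] args) {
    long h1 = hash("apple");
    long h2 = hash("apples");
    // distinct inputs map to distinct 64-bit values with overwhelming probability
    System.out.println(h1 == h2); // almost certainly false
}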
My questions:
1) Is there a formula to estimate the probability of collisions taking into account the so-called Birthday Paradox?
2) Can you estimate the probability of a collision (i.e. two keys that hash to the same value)? Let's say with 1,000 keys and with 10,000 keys?
EDIT: rephrased/corrected question 3
3) Is it safe to assume that a collision among a reasonable number of keys (say, fewer than 10,000) is so improbable that if 2 hash codes are the same we can say the keys are the same without any further checking? e.g.
static boolean equals(Key key1, Key key2) {
    // probability of collision so low we don't need a further check
    return key1.hash64() == key2.hash64();
}
This is not for security, but execution speed is imperative, so avoiding further checks of the keys will save time. If the probability is sufficiently low, say less than 1 in 1 billion for 100,000 keys, it will probably be acceptable.
TIA!
Is there a formula to estimate the probability of collisions taking into account the so-called Birthday Paradox?
Using the Birthday Paradox formula simply tells you at what point you need to start worrying about a collision happening. This is at around Sqrt[n], where n is the total number of possible hash values. In this case n = 2^64, so the Birthday Paradox formula tells you that as long as the number of keys is significantly less than Sqrt[n] = Sqrt[2^64] = 2^32, or approximately 4 billion, you don't need to worry about collisions. The higher n is, the more accurate this estimation. In fact, the probability p(k) that a collision will occur with k keys approaches a step function as n gets larger, with the step occurring at k = Sqrt[n].
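For a quick estimate (a standard approximation, not part of the original answer): for k much smaller than n, the probability of at least one collision is well-approximated by

p(k) ≈ 1 - Exp[-k*(k-1)/(2*n)] ≈ k*(k-1)/(2*n)

which makes the step-function behaviour concrete: p(k) stays negligible until k gets close to Sqrt[n].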
Can you estimate the probability of a collision (i.e. two keys that hash to the same value)? Let's say with 1,000 keys and with 10,000 keys?
Assuming the hash function is uniformly distributed, it's straightforward to derive the formula.
p(no collision for k keys) = 1 * (n-1)/n * (n-2)/n * (n-3)/n * ... * (n-(k-1))/n
That formula follows directly from starting with 1 key: the probability of no collision with 1 key is of course 1. The probability of no collision with 2 keys is 1 * (n-1)/n. And so on for all k keys. Conveniently, Mathematica has a Pochhammer[] function that expresses this succinctly:
p(no collision for k keys) = Pochhammer[n-(k-1),k]/n^k
Then, to calculate the probability that there is at least 1 collision for k keys, subtract it from 1:
p(k) = 1 - p(no collision for k keys) = 1 - Pochhammer[n-(k-1),k]/n^k
Using Mathematica, one can calculate for n = 2^64 that p(k) is approximately 2.7*10^-14 for 1,000 keys, 2.7*10^-12 for 10,000 keys, and 2.7*10^-10 for 100,000 keys.
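For those without Mathematica, the same figures can be reproduced in plain Java via the approximation above (a sketch; the class and method names are mine, and Math.expm1 is used to avoid precision loss when computing 1 - e^-x for tiny x):

public class CollisionEstimate {
    // p(at least one collision among k keys, n possible hash values),
    // using the approximation p(k) ~ 1 - exp(-k(k-1)/(2n))
    static double pCollision(double k, double n) {
        return -Math.expm1(-k * (k - 1) / (2.0 * n));
    }

    public static void main(String[] args) {
        double n = Math.pow(2, 64); // 2^64 possible 64-bit hash values
        for (double k : new double[] { 1_000, 10_000, 100_000 }) {
            System.out.printf("k = %,7.0f  ->  p ~ %.2e%n", k, pCollision(k, n));
        }
    }
}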
Is it safe to assume that a collision among a reasonable number of keys (say, fewer than 10,000) is so improbable that if 2 hash codes are the same we can say the keys are the same without any further checking?
Answering this precisely depends upon the probability that 2 of the 10,000 keys were identical to begin with. What we are looking for is:
p(a=b|h(a)=h(b)) = the probability that a=b given h(a)=h(b)
where a and b are keys (possibly identical) and h() is the hashing function. We can apply Bayes' Theorem directly:
p(a=b|h(a)=h(b)) = p(h(a)=h(b)|a=b) * p(a=b) / p(h(a)=h(b))
We immediately see that p(h(a)=h(b)|a=b) = 1 (if a=b then of course h(a)=h(b)), so we get
p(a=b|h(a)=h(b)) = p(a=b) / p(h(a)=h(b))
As you can see, this depends upon p(a=b), which is the probability that a and b are actually the same key. That in turn depends upon how the group of 10,000 keys was selected in the first place. The calculations for the previous two questions assume all keys are distinct, so more information on this scenario is needed to answer it fully.
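As an illustration only (a toy assumption, not from the reasoning above): if each key were drawn independently and uniformly from a space of N possible keys, then p(a=b) = 1/N and p(h(a)=h(b)) = 1/N + (1 - 1/N)/2^64, giving

p(a=b|h(a)=h(b)) ≈ (1/N) / (1/N + 1/2^64) = 1 / (1 + N/2^64)

So under this model the shortcut in question 3 is only trustworthy when the key space N is small compared to 2^64; for keys drawn from a much larger space, hash equality alone is weak evidence of key equality.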