A long time ago, I bought a data structures book off the bargain table for $1.25. In it, the explanation for a hashing function said that it should ultimately mod by a prime number because of "the nature of math".

What do you expect from a $1.25 book?

Anyway, I've had years to think about the nature of math, and still can't figure it out.

Is the distribution of numbers truly more even when there are a prime number of buckets? Or is this an old programmers' tale that everyone accepts because everybody else accepts it?

+2  A: 

http://computinglife.wordpress.com/2008/11/20/why-do-hash-functions-use-prime-numbers/

Pretty clear explanation, with pictures too.

Edit: As a summary, primes are used because you have the best chance of obtaining a unique value when you multiply values by the chosen prime number and add them all up. For example, given a string, multiplying each letter value by the prime number and then adding them all up will give you its hash value.

A better question would be, why exactly the number 31?
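
For illustration, here is a minimal Java sketch (my own, not from the linked article) of the multiply-and-add string hash described above. It uses 31 as the multiplier, the same constant as Java's String.hashCode(), and the comments note one common answer to the "why 31" question: the multiply reduces to a shift and a subtract.

    // Minimal sketch of a multiply-by-prime ("polynomial") string hash.
    // The multiplier 31 is the same constant Java's String.hashCode() uses.
    public class PolyHash {
        static int polyHash(String s) {
            int h = 0;
            for (int i = 0; i < s.length(); i++) {
                // 31 * h == (h << 5) - h, so the multiply is cheap (one reason 31 is popular).
                // Overflow is harmless: Java int arithmetic is implicitly mod 2^32.
                h = 31 * h + s.charAt(i);
            }
            return h;
        }

        public static void main(String[] args) {
            // Prints the same value as "Samuel".hashCode().
            System.out.println(polyHash("Samuel"));
        }
    }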

AlbertoPL
Although I think a summary would be helpful; in case that site ever goes dead, some remnant of its content will be saved here on SO.
Thomas Owens
+1 for linking, but the article doesn't exactly delve into the math.
Aiden Bell
The article does not explain why, but says "Researchers found that using a prime of 31 gives a better distribution to the keys, and lesser no of collisions. No one knows why..." Funny, asking the same question as me in effect.
theschmitzer
> "A better question would be, why exactly the number 31?" If you mean why the number 31 is used, then the article you point to tells you why, i.e. because it is quick to multiply by and because tests show it is among the best to use. The other popular multiplier I have seen is 33, which lends weight to the theory that the speed issue was (at least initially) an important factor. If you mean what it is about 31 that makes it do better in the tests, then I'm afraid I don't know.
sgmoore
33 is not prime.
starblue
Exactly, so the only reason it could have been used as a multiplier was that it was easy to multiply by. (When I say I have seen 33 used as a multiplier, I don't mean recently; this was probably decades ago, and possibly before a lot of analysis was done on hashing.)
sgmoore
They probably used it because the factors of 33 (besides itself and 1) are 3 and 11, both primes.
AlbertoPL
Well, obviously the fewer factors the better, but if you knew that, why pick something that has an extra factor? I assumed it was picked by someone who did not realise the importance of primes. Maybe someone thought addition would be quicker than subtraction and hence x33 would be faster than x31.
sgmoore
AFAIK, 33 is fine provided that the number of buckets isn't divisible by 3 or 11. So maybe 33 was used for hashtables where the implementation always chose suitable array sizes, rather than for general use as a hash function with unknown purpose.
Steve Jessop
+6  A: 

The reason that prime numbers are used is so that when you're iterating over a fixed space, you're going to get an even distribution across your hash space.

For example, over the space of 0 to 51, using 31 as the key and repeatedly computing s = (s + key) % 52:

 s = (3 + key) % 52 = 34
 s = (34 + key) % 52 = 13
 s = (13 + key) % 52 = 44
 s = (44 + key) % 52 = 23
 ...
 s = (49 + key) % 52 = 28
 s = (28 + key) % 52 = 7

As you can see, the numbers will eventually loop through the entire space of 0 to 51 (a modulo ring). Using a prime step ensures that it shares no common factor with the number of buckets, so all values in that space are hit.
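
As a quick check of that claim (my own sketch, not part of the original answer), the following counts how many distinct buckets are visited when stepping through 52 buckets with a stride of 31, and then with a stride of 26, which shares a factor with 52:

    // Sketch: stepping around a table of 52 buckets.
    // A stride coprime to 52 (e.g. 31) visits every bucket before cycling;
    // a stride sharing a factor with 52 (e.g. 26) visits only a few.
    import java.util.HashSet;
    import java.util.Set;

    public class StrideDemo {
        static int countVisited(int buckets, int stride) {
            Set<Integer> seen = new HashSet<>();
            int s = 0;
            do {
                seen.add(s);
                s = (s + stride) % buckets;
            } while (s != 0);
            return seen.size();
        }

        public static void main(String[] args) {
            System.out.println(countVisited(52, 31)); // 52 -> every bucket is hit
            System.out.println(countVisited(52, 26)); // 2  -> only buckets 0 and 26 are hit
        }
    }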

Gavin Miller
Yes, a modulo ring, as used in cryptographic hash functions.
AlbertoPL
A: 

Primes are unique numbers. They are unique in that the product of a prime with any other number has the best chance of being unique (not as unique as the prime itself, of course) because a prime is used to compose it. This property is used in hashing functions.

Given a string “Samuel”, you can generate a unique hash by multiplying each of the constituent digits or letters by a prime number and adding them up. This is why primes are used.

However, using primes is an old technique. The key here is to understand that as long as you can generate a sufficiently unique key you can move on to other hashing techniques too. See http://www.azillionmonkeys.com/qed/hash.html for more on this topic.

http://computinglife.wordpress.com/2008/11/20/why-do-hash-functions-use-prime-numbers/

What does "not as unique" mean?
Beska
It means: be quiet clown.
hahahah.... actually doesn't the product of 2 primes have a better chance of being 'unique' than the product of a prime and any other number?
SpaceghostAli
+3  A: 

Just to provide an alternate viewpoint there's this site:

http://www.codexon.com/posts/hash-functions-the-modulo-prime-myth

Which contends that you should use the largest number of buckets possible, as opposed to rounding down to a prime number of buckets. It seems like a reasonable possibility. Intuitively, I can certainly see how a larger number of buckets would be better, but I'm unable to make a mathematical argument for this.
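
For what it's worth, here is a small sketch (my own illustration, not from either article) of the birthday-paradox estimate mentioned in the comments below: treating hash values as uniformly random, it computes the probability of at least one collision when inserting 50 keys, for a few bucket counts. The larger the table, the lower the collision probability, prime or not.

    // Sketch: probability of at least one collision when n uniformly random
    // keys fall into m buckets (the classic birthday-paradox calculation).
    public class BirthdayBound {
        static double collisionProbability(int n, int m) {
            double pNoCollision = 1.0;
            for (int i = 0; i < n; i++) {
                pNoCollision *= (double) (m - i) / m; // the i-th key must miss the first i keys
            }
            return 1.0 - pNoCollision;
        }

        public static void main(String[] args) {
            int n = 50; // keys inserted
            for (int m : new int[] {61, 64, 97, 100, 1021, 1024}) {
                System.out.printf("m = %4d  P(collision) = %.4f%n", m, collisionProbability(n, m));
            }
        }
    }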

Falaina
A larger number of buckets means fewer collisions: see the pigeonhole principle.
Unknown
@Unknown: I don't believe that's true. Please correct me if I'm wrong, but I believe applying the pigeonhole principle to hash tables only allows you to assert that there WILL be collisions if you have more elements than bins, not to draw any conclusions on the amount or density of collisions. I still believe that the larger number of bins is the correct route, however.
Falaina
If you assume that the hash values are, for all intents and purposes, random, then by the birthday paradox a larger space (more buckets) will reduce the probability of a collision occurring.
Unknown
A: 

When you work with mod p (where p is prime) you essentially define a finite field (a set that has the operations of addition, subtraction, multiplication, and division by nonzero elements defined). If you work mod n where n is not prime, division by nonzero elements is not always defined.
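
To make that concrete, here is a small sketch (my addition) that brute-forces multiplicative inverses: modulo the prime 7 every nonzero element has one, while modulo 8 the element 2 has none, so dividing by 2 is simply not defined there.

    // Sketch: every nonzero element has a multiplicative inverse mod a prime,
    // but not mod a composite number.
    public class InverseDemo {
        // Returns the inverse of a mod n by brute force, or -1 if none exists.
        static int inverse(int a, int n) {
            for (int b = 1; b < n; b++) {
                if ((a * b) % n == 1) return b;
            }
            return -1;
        }

        public static void main(String[] args) {
            for (int a = 1; a < 7; a++) {
                System.out.println(a + "^-1 mod 7 = " + inverse(a, 7)); // always found
            }
            System.out.println("2^-1 mod 8 = " + inverse(2, 8)); // -1: no inverse exists
        }
    }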

SpaceghostAli
+1  A: 

It depends on the choice of hash function.

Many hash functions combine the various elements in the data by multiplying them by some factors modulo the power of two corresponding to the word size of the machine (that modulus is free, by just letting the calculation overflow).

You don't want any common factor between a multiplier for a data element and the size of the hash table, because then it could happen that varying the data element doesn't spread the data over the whole table. If you choose a prime for the size of the table such a common factor is highly unlikely.

On the other hand, those factors are usually made up from odd primes, so you should also be safe using powers of two for your hash table (e.g. Eclipse uses 31 when it generates the Java hashCode() method).
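
As an illustration of that kind of hash (a sketch in the style Eclipse generates; the class and field names here are made up), the multiplier is the odd prime 31 and the "mod 2^32" really is free, because Java int arithmetic simply wraps on overflow:

    // Sketch of an Eclipse-style hashCode(): fields are combined by repeatedly
    // multiplying by the odd prime 31; int overflow silently provides the
    // "mod 2^(word size)" part.
    public class Point {
        private final int x;
        private final int y;
        private final String label;

        Point(int x, int y, String label) {
            this.x = x;
            this.y = y;
            this.label = label;
        }

        @Override
        public int hashCode() {
            final int prime = 31;
            int result = 1;
            result = prime * result + x;
            result = prime * result + y;
            result = prime * result + ((label == null) ? 0 : label.hashCode());
            return result;
        }
    }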

starblue
+13  A: 

Usually a simple hash function works by taking the "component parts" of the input (characters in the case of a string), and multiplying them by the powers of some constant, and adding them together in some integer type. So for example a typical (although not especially good) hash of a string might be:

(first char) + k * (second char) + k^2 * (third char) + ...

Then if a bunch of strings all having the same first char are fed in, the results will all be the same modulo k, at least until the integer type overflows.

[As an example, Java's string hashCode is eerily similar to this - it does the characters in reverse order, with k=31. So you get striking relationships modulo 31 between strings that end the same way, and striking relationships modulo 2^32 between strings that are the same except near the end. This doesn't seriously mess up hashtable behaviour.]

A hashtable works by taking the modulus of the hash over the number of buckets.

It's important in a hashtable not to produce collisions for likely cases, since collisions reduce the efficiency of the hashtable.

Now, suppose someone puts a whole bunch of values into a hashtable that have some relationship between the items, like all having the same first character. This is a fairly predictable usage pattern, I'd say, so we don't want it to produce too many collisions.

It turns out that "because of the nature of maths", if the constant used in the hash, and the number of buckets, are coprime, then collisions are minimised in some common cases. If they are not coprime, then there are some fairly simple relationships between inputs for which collisions are not minimised. All the hashes come out equal modulo the common factor, which means they'll all fall into the 1/n th of the buckets which have that value modulo the common factor. You get n times as many collisions, where n is the common factor. Since n is at least 2, I'd say it's unacceptable for a fairly simple use case to generate at least twice as many collisions as normal. If some user is going to break our distribution into buckets, we want it to be a freak accident, not some simple predictable usage.
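
Here is a small sketch of that effect (my own toy example, with an arbitrarily chosen constant k = 10): all the keys share the same first character, so their hashes agree modulo 10. With 15 buckets (which shares the factor 5 with 10) they can only ever land in 3 of the 15 buckets; with 13 buckets (coprime to 10) there is no such restriction.

    // Sketch: when the hash constant (k = 10) and the bucket count share a
    // common factor, keys with the same first character crowd into a fixed
    // fraction of the buckets; with a coprime bucket count they spread out.
    import java.util.HashSet;
    import java.util.Set;

    public class CommonFactorDemo {
        // (first char) + k*(second char) + k^2*(third char) + ...
        static int hash(String s, int k) {
            int h = 0, pow = 1;
            for (int i = 0; i < s.length(); i++) {
                h += pow * s.charAt(i);
                pow *= k;
            }
            return h;
        }

        static int distinctBuckets(String[] keys, int k, int buckets) {
            Set<Integer> used = new HashSet<>();
            for (String key : keys) {
                used.add(Math.floorMod(hash(key, k), buckets));
            }
            return used.size();
        }

        public static void main(String[] args) {
            // All keys start with 'a', so every hash is congruent to 'a' mod 10.
            String[] keys = {"ax", "ab", "aq", "art", "aim", "apple", "answer", "area", "ascii", "atlas"};
            System.out.println(distinctBuckets(keys, 10, 15)); // at most 3 of the 15 buckets are ever used
            System.out.println(distinctBuckets(keys, 10, 13)); // keys can land in any of the 13 buckets
        }
    }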

Now, hashtable implementations obviously have no control over the items put into them. They can't prevent them being related. So the thing to do is to ensure that the constant and the bucket counts are coprime. That way you aren't relying on the "last" component alone to determine the modulus of the bucket with respect to some small common factor. As far as I know they don't have to be prime to achieve this, just coprime.

But if the hash function and the hashtable are written independently, then the hashtable doesn't know how the hash function works. It might be using a constant with small factors. If you're lucky it might work completely differently and be nonlinear. If the hash is good enough, then any bucket count is just fine. In the extreme, for a cryptographically secure hash even generating a set of inputs which produces a distribution modulo 72 that's "more uneven than random", is computationally infeasible, and 72 has loads of factors. But a paranoid hashtable can't assume a good hash function, so should use a prime number of buckets. Similarly a paranoid hash function should use a largeish prime constant, to reduce the chance that someone uses a number of buckets which happens to have a common factor with the constant.

In practice, I think it's fairly normal to use a power of 2 as the number of buckets. This is convenient and saves having to search around or pre-select a prime number of the right magnitude. So you rely on the hash function not to use even multipliers, which is generally a safe assumption. But you can still get occasional bad hashing behaviours based on hash functions like the one above, and prime bucket count could help further.

Putting about the principle that "everything has to be prime" is as far as I know a sufficient but not a necessary condition for good distribution over hashtables. It allows everybody to interoperate without needing to assume that the others have followed the same rule.

Steve Jessop
A: 

For a hash function it's not only important to minimize collisions in general, but to make it impossible to keep the same hash while changing a few bytes.

Say you have the equation: (x + y*z) % key = x, with 0 < x < key and 0 < z < key. The equation holds exactly when y*z is a multiple of key. If key is a prime number, that forces y to be a multiple of key, i.e. y = n*key is a solution for every n in N and nothing else is.

An example where key isn't a prime: x = 1, z = 2 and key = 8. Because key/z = 4 is still a natural number, 4 already becomes a solution for our equation, and in this case every y = n*4 works. The number of solutions for the equation has practically doubled because 8 isn't a prime.

If our attacker already knows that y = 8 is a solution for the equation, he can use y = 4 instead and still get the same hash.
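
A tiny sketch of that count (my own illustration): for each step size y up to 64 it checks whether (x + y*z) % key == x, i.e. whether a change of y*z leaves the hash untouched, once with key = 8 and once with the prime key = 7:

    // Sketch: count how many step sizes y leave (x + y*z) % key unchanged.
    // With the composite key 8 and z = 2, every multiple of 4 works;
    // with the prime key 7, only multiples of 7 do.
    public class CollisionSteps {
        static int countSolutions(int key, int x, int z, int limit) {
            int count = 0;
            for (int y = 1; y <= limit; y++) {
                if ((x + y * z) % key == x) count++;
            }
            return count;
        }

        public static void main(String[] args) {
            System.out.println(countSolutions(8, 1, 2, 64)); // 16 (y = 4, 8, 12, ...)
            System.out.println(countSolutions(7, 1, 2, 64)); // 9  (y = 7, 14, 21, ...)
        }
    }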

Christian
+2  A: 

The first thing you do when inserting/retrieving from a hash table is to calculate the hashCode for the given key and then find the correct bucket by trimming the hashCode to the size of the hashTable by doing hashCode % table_length. Here are 2 'statements' that you most probably have read somewhere:

  1. If you use a power of 2 for table_length, finding (hashCode(key) % 2^n ) is as simple and quick as (hashCode(key) & (2^n -1)). But if your function to calculate hashCode for a given key isn't good, you will definitely suffer from clustering of many keys in a few hash buckets.
  2. But if you use prime numbers for table_length, the hashCodes calculated could still map into distinct hash buckets even if you have a slightly stupid hashCode function.

And here is the proof.

Suppose your hashCode function results in the following hashCodes among others {x, 2x, 3x, 4x, 5x, 6x...}, then all these are going to be clustered in just m buckets, where m = table_length/GreatestCommonFactor(table_length, x). (It is trivial to verify/derive this.) Now you can do one of the following to avoid clustering:

  1. Make sure that you don't generate too many hashCodes that are multiples of another hashCode, like in {x, 2x, 3x, 4x, 5x, 6x...}. But this may be kind of difficult if your hashTable is supposed to have millions of entries.
  2. Or simply make m equal to table_length by making GreatestCommonFactor(table_length, x) equal to 1, i.e. by making table_length coprime with x. And if x can be just about any number, then make sure that table_length is a prime number.
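
Here is a small sketch (mine, not from the linked post) that checks the claim numerically: it drops the hashCodes x, 2x, 3x, ... into tables of different lengths, counts the distinct buckets actually used, and compares that with table_length / GreatestCommonFactor(table_length, x):

    // Sketch: hashCodes of the form {x, 2x, 3x, ...} occupy exactly
    // table_length / gcd(table_length, x) distinct buckets.
    import java.util.HashSet;
    import java.util.Set;

    public class ClusteringDemo {
        static int gcd(int a, int b) {
            return b == 0 ? a : gcd(b, a % b);
        }

        static int distinctBuckets(int x, int tableLength, int count) {
            Set<Integer> used = new HashSet<>();
            for (int i = 1; i <= count; i++) {
                used.add((i * x) % tableLength);
            }
            return used.size();
        }

        public static void main(String[] args) {
            int x = 4;
            for (int tableLength : new int[] {32, 100, 101}) { // 101 is prime
                System.out.println("table_length = " + tableLength
                        + ": buckets used = " + distinctBuckets(x, tableLength, 10000)
                        + ", predicted m = " + tableLength / gcd(tableLength, x));
            }
        }
    }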

From - http://srinvis.blogspot.com/2006/07/hash-table-lengths-and-prime-numbers.html