views: 1180 · answers: 5
The Adler-32 checksum algorithm does sums modulo 65521. I know that 65521 is the largest prime number that fits in 16 bits, but why is it important to use a prime number in this algorithm?

(I'm sure the answer will seem obvious once someone tells me, but the number-theory parts of my brain just aren't working. Even without expertise in checksum algorithms, a smart person who reads http://en.wikipedia.org/wiki/Fletcher%27s_checksum can probably explain it to me.)

A: 

http://en.wikipedia.org/wiki/Fermat's_little_theorem

eulerfx
How is Fermat's little theorem relevant? This algorithm adds; it doesn't exponentiate.
ShreevatsaR
+2  A: 

Long story short:

Taking the remainder modulo a prime has the best bit-shuffling properties, and that's exactly what we want for a hash value.

Nils Pipenbrinck
What is it about the modulo of a prime that makes it a better bit shuffler?
Kristopher Johnson
+13  A: 
Unknown
You're only considering the simple case of sums modulo a number. Obviously there will be fewer collisions when you take random numbers and mod them to a larger number, prime or not. For something useful, compare with multiplication instead, or, for this question, compare the Fletcher/Adler-32 checksum algorithm.
ShreevatsaR
Oops, the above comment was made in haste; I see you did compare Fletcher v/s Adler-32. Good work :) It is not surprising that simply taking mod a larger number gives fewer collisions; there are more buckets after all. I do object to your statement that "We know much less about prime numbers than composite ones. Therefore people like Knuth started using them", though. You make it sound like voodoo. Knuth doesn't do voodoo. :P I'm pretty certain what's going on here is people hurriedly reading TAOCP and misinterpreting it.
ShreevatsaR
@ShreevatsaR: well many people would like to believe Knuth is infallible, but Paul Hsieh's quote seems to indicate that he rewrote it to remove the mod P: "I showed him examples where modulo by prime had extremely bad collisions, but this did not seem to sway him at all. It seems Knuth’s rewrites have come too late."
Unknown
Yes, but it seems he's as vague about which parts of Knuth and which rewrites as his boss presumably was. It's much more likely that the boss just pointed vaguely and said "look, primes!", and Paul Hsieh is just vaguely saying "look, 30-year-old book". :) (I upvoted this answer, BTW.)
ShreevatsaR
@ShreevatsaR: thanks for the upvote. Also good find on the Maxino article that I somehow missed. I just needlessly duplicated a finding.
Unknown
The meaning of the collisions graph isn't clear to me. Right now it looks like 'buckets per collision' which can't be right.
Zr40
@Zr40 it means how many buckets have X collisions.
Unknown
+3  A: 

The Adler-32 algorithm computes

A = 1 + b1 + b2 + b3 + ...

and

B = (1 + b1) + (1 + b1 + b2) + (1 + b1 + b2 + b3) + ...

and reports both modulo m. (For the argument below we can drop the constant terms and count bytes from the other end, which changes nothing essential; that leaves A = b1 + b2 + b3 + ... and B = b1 + 2 * b2 + 3 * b3 + ..., the form the examples use.) When m is prime, the numbers modulo m form what mathematicians call a field. Fields have the handy property that for any nonzero c, we have a = b if and only if c * a = c * b. Compare the times table modulo 6, which is not a prime, with the times table modulo 5, which is:

* 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 1 2 3 4 5
2 0 2 4 0 2 4
3 0 3 0 3 0 3
4 0 4 2 0 4 2
5 0 5 4 3 2 1

* 0 1 2 3 4
0 0 0 0 0 0
1 0 1 2 3 4
2 0 2 4 1 3
3 0 3 1 4 2
4 0 4 3 2 1
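A tiny brute-force sketch (mine, not part of the original answer) of the cancellation property the answer names, checking that it holds modulo 5 but fails modulo 6:

```python
def has_cancellation(m):
    """True if c*a == c*b (mod m) implies a == b, for every nonzero c.

    This is the field property: it holds exactly when m is prime.
    """
    for c in range(1, m):
        for a in range(m):
            for b in range(m):
                if a != b and (c * a) % m == (c * b) % m:
                    return False
    return True

print(has_cancellation(5))  # True: 5 is prime
print(has_cancellation(6))  # False: e.g. 2*3 == 2*0 == 0 (mod 6)
```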

Now, the A part gets fooled whenever we interchange two bytes -- addition is commutative after all. The B part is supposed to detect this kind of error, but when m is not a prime, more locations are vulnerable. Consider an Adler checksum mod 6 of

1 3 2 0 0 4

We have A = 4 and B = 1. Now consider swapping b2 and b4:

1 0 2 3 0 4

A and B are unchanged because 2 * 3 + 4 * 0 = 2 * 0 + 4 * 3 (modulo 6). One can also swap the bytes 2 and 4 (at indexes 3 and 6) to the same effect. This is more likely when the times table is unbalanced -- modulo 5, these changes are detected. In fact, the only time a prime modulus fails to detect a single swap of two distinct values is when the two indexes are equal mod m (and if m is big, they must be far apart!).^ This logic can also be applied to interchanged substrings.
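A quick sketch (mine, not from the answer) of the simplified checksum this answer works with -- A as the sum of bytes, B as the sum of index times byte, both mod m -- reproducing the swap example:

```python
def check(data, m):
    """Simplified Adler-style checksum: A = sum of bytes,
    B = sum of (1-based index) * byte, both mod m.
    (Real Adler-32 also adds constant 1s and uses m = 65521;
    the swap argument is the same.)"""
    a = sum(data) % m
    b = sum(i * x for i, x in enumerate(data, start=1)) % m
    return a, b

original = [1, 3, 2, 0, 0, 4]
swapped  = [1, 0, 2, 3, 0, 4]   # b2 and b4 interchanged

print(check(original, 6))  # (4, 1)
print(check(swapped, 6))   # (4, 1) -- the swap goes undetected mod 6
print(check(original, 5), check(swapped, 5))  # (0, 2) (0, 3) -- detected mod 5
```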

The disadvantage of using a smaller modulus is that it fails slightly more often on random data; in the real world, however, corruption is rarely random.

^ Proof: suppose that we swap indexes i and j with values a and b. Then a*i + b*j = a*j + b*i, so a*i - a*j + b*j - b*i = 0 and (a - b)*(i - j) = 0. Since a field is an integral domain, it follows that a = b (values are congruent) or i = j (indexes are congruent).
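The footnote's claim can also be checked by brute force; a sketch (mine, not from the answer) using the fact that a swap of values a, b at indexes i, j changes B by (a - b) * (i - j) mod m:

```python
def swap_undetected(a, b, i, j, m):
    """A swap of values a, b at indexes i, j leaves B unchanged
    exactly when (a - b) * (i - j) == 0 (mod m)."""
    return (a - b) * (i - j) % m == 0

m = 5  # prime
# For prime m, an undetected swap forces a == b or i == j (mod m),
# matching the footnote's proof:
assert all(
    swap_undetected(a, b, i, j, m) == ((a - b) % m == 0 or (i - j) % m == 0)
    for a in range(8) for b in range(8)
    for i in range(1, 8) for j in range(1, 8)
)
print("footnote verified for m =", m)
```

For composite m the equivalence fails: (3 - 0) * (2 - 4) = -6 is 0 mod 6 even though neither the values nor the indexes coincide, which is exactly the undetected swap from the example above.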

EDIT: the website that Unknown linked to (http://www.zlib.net/zlib_tech.html) makes it clear that the design of Adler-32 was not at all principled. Because of the Huffman code in a DEFLATE stream, even small errors are likely to change the framing (because it's data-dependent) and cause large errors in the output. Consider this answer a slightly contrived example for why people ascribe certain properties to primes.

Dave
Sorry, but your conclusion is incorrect. 1,3,2,0,5,4 -> 1,3,2,5,0,4 % 5 under equation B of adler-32 gives 3 for both. While using mod 6, it gives 1 and 0 respectively. The indexes here swapped are 4 and 5, both of which are not "two equal indexes mod m", therefore your conjecture is disproved.
Unknown
Not disproved -- they're congruent mod 5. I changed the example.
Dave
(m should be larger than the maximum input value.)
Dave
BTW, thanks for the downvotes on my unrelated questions? http://imgur.com/xTa3t.png
Unknown
+1  A: 

For perfectly random data, the more buckets the better.

Let's say the data is non-random in some way. Now, the only way that the non-randomness could affect the algorithm is by creating a situation where some buckets have a higher probability of being used than others.

If the modulus is non-prime, then a pattern aligned with any of its factors can affect the hash. So if you're using 15, a pattern recurring every 3 or every 5, as well as every 15, could cause collisions, while if you're using 13 the pattern would have to recur every 13 to cause collisions.

65535 = 3*5*17*257, so a pattern involving 3 or 5 could cause collisions with this modulus -- if multiples of 3 were much more common for some reason, for instance, then only the buckets which are multiples of 3 would be put to good use.
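A small sketch (hypothetical data, mine rather than the answer's) of that bucket-skew effect, comparing a composite modulus 15 against a prime modulus 13 on inputs that are all multiples of 3:

```python
# If every input shares a factor with the modulus, only a fraction of
# the buckets can ever be hit; a prime modulus has no factor to share.
multiples_of_3 = [3 * k for k in range(200)]

buckets_15 = {x % 15 for x in multiples_of_3}
buckets_13 = {x % 13 for x in multiples_of_3}

print(sorted(buckets_15))  # [0, 3, 6, 9, 12] -- only 5 of 15 buckets used
print(len(buckets_13))     # 13 -- all buckets used
```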

Now I'm not sure whether, realistically, this is likely to be an issue. It would be good to determine the collision rate empirically with actual data of the type one wants to hash, not random numbers. (For instance, would numerical data involving Benford's Law or some such irregularity cause patterns that would affect this algorithm? How about using ASCII codes for realistic text?)

Erika