Maybe I'm just not seeing it, but CRC32 seems either needlessly complicated or insufficiently explained anywhere I could find on the web.
I understand the gist of it: it is the remainder of a carry-less (modulo-2) binary division of the message by the polynomial. But the actual implementation escapes me. I've read A Painless Guide To CRC Error Detection Algorithms, and I must say it was not painless. It covers the theory rather well, but the author never gets to a simple "this is it." He does say what the parameters for the standard CRC-32 algorithm are, but he neglects to lay out clearly how you arrive at it. The part that loses me is where he says "this is it" and then adds, "oh, by the way, it can be reversed or started with different initial conditions," without ever giving a clear statement of the final way to calculate a CRC32 checksum once all of the changes he just described are applied.
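For concreteness, here is my best guess at what the full bit-at-a-time algorithm looks like once all of those modifications are folded in. I'm assuming the standard CRC-32 parameters: the reversed polynomial 0xEDB88320 (which, as I understand it, makes the bit-reflection implicit), an initial value of 0xFFFFFFFF, and a final XOR with 0xFFFFFFFF. Part of my question is whether this reading is even correct:

#include <stddef.h>
#include <stdint.h>

/* Bit-at-a-time CRC-32 as I understand it: shifting right with the
   reversed polynomial 0xEDB88320 handles the "reflected" input and
   output ordering without explicitly reversing any bits. */
uint32_t crc32_bitwise(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFF;              /* initial value */
    for (size_t i = 0; i < len; i++)
    {
        crc ^= data[i];                     /* fold in the next byte */
        for (int j = 0; j < 8; j++)
        {
            if (crc & 1)
                crc = (crc >> 1) ^ 0xEDB88320;
            else
                crc >>= 1;
        }
    }
    return crc ^ 0xFFFFFFFF;                /* final XOR */
}

If this is right, running it over the nine ASCII bytes of "123456789" should give 0xCBF43926, the check value most references quote.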
Anyway, setting that aside: does somebody have a simple explanation of how it is actually calculated? I attempted to code the table generation in C; my attempt is included below:
#include <stdint.h>

/* temp must be an unsigned 32-bit type: once 0xEDB88320 sets the top
   bit, right-shifting a signed value would drag 1s in from the left
   and corrupt the table. */
uint32_t testcrc[256];
uint32_t temp;
int i, j;

for (i = 0; i < 256; i++)
{
    temp = i;
    for (j = 0; j < 8; j++)
    {
        if (temp & 1)
        {
            temp >>= 1;
            temp ^= 0xEDB88320;  /* reversed CRC-32 polynomial */
        }
        else
        {
            temp >>= 1;
        }
    }
    testcrc[i] = temp;
}
But this seems to generate values inconsistent with the ones I have found elsewhere on the internet. I could use those values, but I wanted to understand how they were arrived at.
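For reference, here is how I believe the finished table is supposed to be used, based on my reading of the table-driven section of the guide (again, this is my guess; the guide never spells it out in one place):

#include <stddef.h>
#include <stdint.h>

extern uint32_t testcrc[256];   /* the table built by the loop above */

/* Table-driven CRC-32: process a byte at a time instead of a bit at
   a time, with the same parameters as the bitwise version above. */
uint32_t crc32_table_driven(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFF;
    for (size_t i = 0; i < len; i++)
        crc = (crc >> 8) ^ testcrc[(crc ^ data[i]) & 0xFF];
    return crc ^ 0xFFFFFFFF;
}

If both my table and this loop are right, it should agree with the bitwise version on every input, including producing 0xCBF43926 for "123456789".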
Any help in clearing up these incredibly confusing numbers would be greatly appreciated.