I have a hypothetical situation involving sending data units of a thousand bytes each. Failures are rare, but when an error does occur it is less likely to be a single flipped bit and more likely to be a burst error affecting a few bits in a row.

At first I thought of using a simple checksum, but apparently that can miss errors spanning more than a single bit. A parity check won't work either (it misses any even number of flipped bits), so a CRC might be the best option.

Is using a Cyclic Redundancy Check on a thousand bytes efficient? Or are there other methods that would work better?

+3  A: 

Cyclic Redundancy Checks (CRCs) are popular specifically because they detect multiple-bit errors efficiently and with mathematically guaranteed coverage: an n-bit CRC with a well-chosen polynomial detects any single error burst up to n bits long.

There are different ways to design a CRC polynomial, and the trade-off is detection strength vs. computational complexity. In your case, you can choose the "fastest" one that still meets your requirements for accuracy.
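For a concrete sketch (assuming the common IEEE 802.3 reflected polynomial 0xEDB88320, the CRC-32 used by Ethernet and zip), a bit-at-a-time implementation in C might look like this; a 32-bit check of this kind catches any single burst up to 32 bits long, which covers the "few bits in a row" failures you describe:

    #include <stdint.h>
    #include <stddef.h>

    /* Bit-at-a-time CRC-32 (reflected IEEE 802.3 polynomial 0xEDB88320).
     * A sketch for illustration; production code would use a vetted
     * library routine or a table-driven form. */
    uint32_t crc32_bitwise(const uint8_t *data, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;          /* conventional initial value */
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int bit = 0; bit < 8; bit++) {
                if (crc & 1u)                /* divisor "fits": shift and subtract */
                    crc = (crc >> 1) ^ 0xEDB88320u;
                else
                    crc >>= 1;
            }
        }
        return crc ^ 0xFFFFFFFFu;            /* conventional final inversion */
    }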

You might want to start with this Wikipedia article on the Cyclic Redundancy Check.

Enjoy,

Robert C. Cartaino

Robert Cartaino
Thanks, I was just looking for advice on the efficiency because I couldn't find it anywhere.
GreenRails
A: 

It's normal to use a CRC. I'm not certain what you mean by 'efficiency', but note that the CRC is sometimes implemented in hardware (e.g. on the Ethernet card). In software you will find 'optimized' implementations that use a lookup table to process a whole byte per step.
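As a sketch of that lookup-table idea (again assuming the IEEE 802.3 polynomial 0xEDB88320): precompute 256 entries once, and each input byte then costs one table lookup and one XOR instead of eight shift/XOR steps:

    #include <stdint.h>
    #include <stddef.h>

    static uint32_t crc_table[256];

    /* Build the table once: each entry is the CRC of one byte value. */
    static void crc32_init_table(void)
    {
        for (uint32_t n = 0; n < 256; n++) {
            uint32_t c = n;
            for (int k = 0; k < 8; k++)
                c = (c & 1u) ? (c >> 1) ^ 0xEDB88320u : c >> 1;
            crc_table[n] = c;
        }
    }

    /* One table lookup and one XOR per input byte. */
    uint32_t crc32(const uint8_t *data, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++)
            crc = (crc >> 8) ^ crc_table[(crc ^ data[i]) & 0xFFu];
        return crc ^ 0xFFFFFFFFu;
    }

Slicing-by-four and slicing-by-eight variants extend the same idea with larger tables for even more throughput.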

ChrisW
A: 

CRC is covered in another question here:
When is CRC more appropriate to use than MD5/SHA1?
It is well suited to detecting random transmission errors and is easy to implement.

nik
A: 

How big are your disk sectors? Probably at least 512 bytes, and CRC is a time-honored scheme for hardware-level error checking on disks.

The stock CRC polynomial algorithms are quite effective against small numbers of bit errors, and their exact detection guarantees are mathematically computable. CRC is also highly efficient to implement in hardware, where a relatively small number of gates and a shift register can compute it on the fly.
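To illustrate how small that shift-register structure is, here is a bit-serial sketch in C (assuming the CCITT polynomial x^16 + x^12 + x^5 + 1, i.e. 0x1021, chosen only for illustration) that mirrors the hardware: a 16-bit register plus a few XOR gates, clocked once per input bit:

    #include <stdint.h>
    #include <stddef.h>

    /* Bit-serial CRC-16-CCITT (polynomial 0x1021), written to mirror
     * the hardware: a 16-bit shift register with XOR taps at the
     * polynomial terms, clocked once per input bit. */
    uint16_t crc16_ccitt(const uint8_t *data, size_t len)
    {
        uint16_t reg = 0xFFFFu;              /* common initial value */
        for (size_t i = 0; i < len; i++) {
            for (int bit = 7; bit >= 0; bit--) {
                /* One clock tick: feed the next data bit in. */
                unsigned fb = ((reg >> 15) ^ ((unsigned)data[i] >> bit)) & 1u;
                reg = (uint16_t)(reg << 1);
                if (fb)
                    reg ^= 0x1021u;          /* XOR taps at the polynomial terms */
            }
        }
        return reg;
    }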

Tim Holloway