I have a hypothetical situation of sending data units, each a thousand bytes long. Failures are rare, but when an error does occur it is less likely to be a single flipped bit and more likely to be a burst error: several corrupted bits in a row.
At first I thought of using a simple additive checksum, but apparently that can miss multi-bit errors that happen to cancel out. A single parity bit won't work either, since it only catches an odd number of flipped bits, so a CRC might be the best option.
Is using a Cyclic Redundancy Check on a thousand bytes efficient? Or are there other methods that would work better?
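For context, here is a minimal sketch of what I had in mind, using Python's standard `zlib.crc32` and a hypothetical framing where a 4-byte CRC is appended to each thousand-byte unit:

```python
import zlib

def crc_frame(payload: bytes) -> bytes:
    # Append a 4-byte big-endian CRC-32 of the payload to the frame.
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def verify_frame(frame: bytes) -> bool:
    # Recompute the CRC over the payload and compare to the stored value.
    payload, received = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == received

# A clean 1000-byte unit verifies; a 3-bit burst in one byte is caught.
frame = crc_frame(bytes(1000))
assert verify_frame(frame)

corrupted = bytearray(frame)
corrupted[10] ^= 0b00001110  # simulate a short burst of adjacent bit flips
assert not verify_frame(bytes(corrupted))
```

My understanding is that a CRC-32 is guaranteed to catch any single burst of up to 32 bits, which seems to match my error pattern, but I'm not sure whether computing it over a thousand bytes per unit is considered cheap.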