views: 88 · answers: 3
I'm looking for a forward error-correcting code that is relatively easy/fast to encode on a microcontroller; decode will be done on a PC so it can be more complicated.

I don't know that much about error-correcting codes and except for the simple Hamming codes they seem to all be more complicated than I can handle.

Any recommendations?

edit: I'm going to cut things short and accept Carl's answer... I guess there were two things I didn't mention:

(1) I don't strictly need the error correction, it's just advantageous for me, and I figured that there might be some error correction algorithm that was a reasonable benefit for a minimal cost. Hamming codes are probably about the right fit and even they seem like they might be too costly for my encoding application.

(2) The greater advantage than the error correction itself is the ability to resync properly to packets that follow an error. (if I get out of sync for a long time, that's bad) So I think it's just better if I keep things simple.
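For reference, the Hamming codes mentioned in (1) are about as cheap as FEC gets on the encode side: a handful of XORs per 4-bit nibble. A minimal sketch of Hamming(7,4), assuming the standard layout with parity bits at positions 1, 2, and 4 (function names are illustrative, not from any particular library):

```python
def hamming74_encode(nibble):
    """Encode a 4-bit value into a 7-bit Hamming(7,4) codeword.
    Data bits go to positions 3, 5, 6, 7; parity bits to 1, 2, 4."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]   # covers positions 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 3, 6, 7
    p4 = d[1] ^ d[2] ^ d[3]   # covers positions 5, 6, 7
    bits = [p1, p2, d[0], p4, d[1], d[2], d[3]]
    return sum(b << i for i, b in enumerate(bits))

def hamming74_decode(word):
    """Decode a 7-bit codeword, correcting up to one flipped bit.
    The syndrome is the 1-based position of the flipped bit (0 = clean)."""
    bits = [(word >> i) & 1 for i in range(7)]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s4 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s4 << 2)
    if syndrome:
        bits[syndrome - 1] ^= 1  # flip the erroneous bit back
    return bits[2] | (bits[4] << 1) | (bits[5] << 2) | (bits[6] << 3)
```

The encoder is three XOR expressions per nibble, which is roughly the cost ceiling the question describes.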

+3  A: 

The problem with error-correcting codes is that they'll let you recover from single-bit or maybe 2-bit errors, but usually not detect or patch up major damage.

Thus, my recommendation would be instead to divide your data streams up into blocks (1 KB, 10 KB, ... 1 MB at most) and calculate a checksum for each block. Then, when the data arrives on the other CPU, you can ascertain whether it is correct and request retransmission of that block if not. So the receiving computer would either acknowledge and wait for the next block, or negative-acknowledge and expect a re-send.
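The block-plus-checksum framing described above can be sketched as follows (using zlib's CRC-32; the 4-byte length prefix and helper names are assumptions for illustration):

```python
import zlib

def frame_block(block: bytes) -> bytes:
    """Prepend a 4-byte length and append a CRC-32 so the receiver
    can verify the block and request a re-send on mismatch."""
    crc = zlib.crc32(block)
    return len(block).to_bytes(4, "big") + block + crc.to_bytes(4, "big")

def check_frame(frame: bytes):
    """Return the payload if the CRC verifies, else None (at which
    point the receiver would negative-acknowledge and wait)."""
    n = int.from_bytes(frame[:4], "big")
    block = frame[4:4 + n]
    crc = int.from_bytes(frame[4 + n:4 + n + 4], "big")
    return block if zlib.crc32(block) == crc else None
```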

Yes, we're implementing a subset of TCP/IP here. But there's a reason this protocol was so successful: It works!

For a checksum, I'd recommend CRC-32. It requires a table of 256 32-bit numbers and some reasonably easy computation (array indexing, shifts, and XOR, mostly), so it's fairly easy for a "dumb" CPU to compute.
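As a sketch of that table-driven approach (the standard reflected CRC-32 with polynomial 0xEDB88320, as used by zlib and Ethernet):

```python
def make_crc32_table():
    """Build the 256-entry lookup table for reflected CRC-32
    (polynomial 0xEDB88320). Done once, at startup or compile time."""
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xEDB88320 if crc & 1 else crc >> 1
        table.append(crc)
    return table

CRC32_TABLE = make_crc32_table()

def crc32(data: bytes) -> int:
    """Per byte: one table lookup, one shift, one XOR -- cheap
    even on a small CPU."""
    crc = 0xFFFFFFFF
    for b in data:
        crc = (crc >> 8) ^ CRC32_TABLE[(crc ^ b) & 0xFF]
    return crc ^ 0xFFFFFFFF
```

On a microcontroller the 1 KB table would normally live in flash/ROM rather than be computed at runtime.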

Carl Smotricz
In other words, you're advocating against forward error correction, and instead for a more conventional "interactive" protocol with checksums and acknowledges and what-not. The use of explicit acknowledges in this application is not practical (loop latency would be intolerable compared to the throughput I need), so it's not what I'm looking for, but advice like this is useful for other applications.
Jason S
Oh, OK. That changes things a bit. In my experience, and I have indeed programmed IO on embedded microcomputer systems, communication either goes flawlessly or not at all. Unless you're in a very noisy environment, you won't get the kind of errors handled by single-bit correction. I know this is still not helping with your original question... I'll go see what I can dig up.
Carl Smotricz
OK, I've looked at the Wikipedia articles and references... I could probably implement one of the proposed algorithms, but it would be a pain in the neck. Do you have enough bandwidth that you can afford to do something stupid and simple? Send your data in blocks, and send each block 3 times. For every bit, best 2 out of 3 wins. A complete forward ECC algorithm, described in 2 sentences :)
Carl Smotricz
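Carl's stupid-and-simple scheme really is one expression per byte: the bitwise majority of the three copies. A sketch (the function name is mine):

```python
def majority_vote(a: bytes, b: bytes, c: bytes) -> bytes:
    """Best 2 out of 3 for every bit across three copies of a block:
    (a&b) | (a&c) | (b&c) picks each bit's majority value."""
    return bytes((x & y) | (x & z) | (y & z) for x, y, z in zip(a, b, c))
```

Any single bit error survives as long as the other two copies agree at that position.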
I wrote some code that interfaced with a radar detector. On my bench supply at my desk the comms were perfect, but in the car there was all kinds of noise. After analyzing the data, it looked like every packet was being sent twice. In the end I just compared the last two packets, and if they were the same, I considered it valid. This approach worked well enough that I didn't see any glitches, and the data was sent fast enough that it wouldn't get too out of date.
Dolphin
Hmmm. You both bring up interesting points. I guess what I'm looking for is something that's slightly better than CRC in that it allows some error correction, isn't computationally expensive on the source end, and doesn't interfere with my throughput needs. (sending 3x is too expensive for bandwidth usage) It doesn't have to be perfectly bulletproof, and I was just looking to see if something out there existed already (maybe it doesn't, in which case a CRC is probably good enough).
Jason S
(bargaining like on a Turkish bazaar): I'll make you a deal! Send your data blocks just twice, but with a CRC (that's just 4 bytes) on each. If one block is mangled and the other is good, you can recover perfectly. Slightly less failsafe than 3 blocks but a fighting chance. Success will depend on how noise timing correlates to block lengths.
Carl Smotricz
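The send-twice-with-CRC bargain can be sketched like this (again using zlib's CRC-32; the trailing-CRC framing and function name are assumptions for illustration):

```python
import zlib

def pick_good_copy(copy1: bytes, copy2: bytes):
    """Each copy is a payload plus a trailing 4-byte CRC-32.
    Return the first payload whose CRC verifies, or None if
    both copies arrived mangled."""
    for copy in (copy1, copy2):
        payload = copy[:-4]
        crc = int.from_bytes(copy[-4:], "big")
        if zlib.crc32(payload) == crc:
            return payload
    return None
```

As Carl notes, this only wins when noise bursts don't happen to hit both copies of the same block.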
Heh, we're still coming from different viewpoints here. A 4-byte CRC eats up too much communication bandwidth; 16 bits of error detection/correction is really at about the sweet spot. No way I can send data twice. Maybe I'll ask a separate question with some more specifics, but this site doesn't really lend itself to back-and-forth discussions.
Jason S
I'm sorry I wasn't of more help, then. Good luck!
Carl Smotricz
well, +1 for the effort. :-)
Jason S
A: 

http://en.wikipedia.org/wiki/Low-density_parity-check_code

or probably better

http://en.wikipedia.org/wiki/Turbo_code

ralu
um... those are rather complicated. (especially turbo codes)
Jason S
There is no such thing as free beer. You can have either a better error code or higher performance. You did not mention how many bits you expect to lose.
ralu
+2  A: 

I haven't quite gotten straight how much overhead you can afford. In your comment you say a 16-bit error detection/correction code is about right, but you don't specify how large a block you're thinking of attaching that to. To be meaningful, you should probably express the allowable overhead as a percentage. 16 bits of error correction for 64 bits of data is a lot different from 16 bits of error correction for a kilobyte of data.

If you can afford something like 15-20% overhead or so, you can probably use a convolutional code with a Viterbi decoder. This is highly asymmetrical -- the convolutional coder is quite simple (basically a shift register, with output taps leading to XORs). A really large one might use a 16-bit register with a half dozen or so XORs.
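A convolutional encoder of this shape really is just a shift register and a couple of XOR tap sets. A sketch of a rate-1/2 encoder using the common K=7 polynomials 171/133 (octal) -- note that rate 1/2 is 100% overhead, so getting down to 15-20% would require a higher-rate code or puncturing, which this sketch doesn't attempt:

```python
def conv_encode(bits, k=7, polys=(0o171, 0o133)):
    """Rate-1/(len(polys)) convolutional encoder: shift each input bit
    into a k-stage register, then for each tap polynomial emit the
    parity (XOR) of the tapped register bits."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)
        for p in polys:
            out.append(bin(state & p).count("1") % 2)
    return out
```

On a microcontroller the same loop is a shift, an AND per polynomial, and a parity computation per input bit.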

Fortunately you have a heavier-duty computer to handle the decoding, because a Viterbi decoder can be a fearsome beast. Especially as you use a larger encoder to reduce the overhead, the size of the decoder explodes. The size of the decoder is exponential with respect to the size of the code group.

Turbo codes were mentioned. These can make better use of available bandwidth than convolutional codes with Viterbi decoders -- but they use a considerably more complex encoder -- a minimum of two convolutional coders of a specific type (recursive systematic convolutional encoders). As such, they don't seem to fit your specification as well.

Jerry Coffin
re: overhead: not much, either in data or computation. packets are short, 8-16 bytes typical, and I don't want to see more than 16 bits per packet overhead. Computational complexity of having a code unit different than a packet unit (e.g. multiple packets per code or multiple codes per packet) is probably unacceptable. And unfortunately a hardware solution is out, so even a convolutional coder, if I understand you right, is probably too much work in software. But +1 for some good explanation + suggestions.
Jason S