Hi

I have some C code that stores ASCII strings in memory as a four-byte length followed by the string data. The string lengths are in the range 10-250 bytes.

To reduce memory usage I'd like to compress each string individually on the fly, still storing the length (of the compressed string) followed by the compressed data.

I don't want to compress at a larger scope than individual strings because any string can be read/written at any time.

What libraries/algorithms are available for doing this?

Thanks for your help. NickB

+12  A: 

ZLib is always at your service: it has very little overhead for cases where the string contains incompressible data, it's relatively fast, it's free, and it can be easily integrated into C and C++ programs.
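
A minimal sketch of zlib's one-shot API for a single string (assuming you link with -lz; buffer sizes and error handling are pared down for the demo):

    /* Per-string compression with zlib's one-shot helpers. */
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        const char *src = "an example ASCII string of the kind being stored";
        uLong src_len = (uLong)strlen(src);

        Bytef dst[512];                           /* demo buffer */
        uLongf dst_len = compressBound(src_len);  /* worst-case output size */

        if (compress2(dst, &dst_len, (const Bytef *)src, src_len,
                      Z_BEST_COMPRESSION) != Z_OK)
            return 1;
        printf("raw %lu -> compressed %lu bytes\n",
               (unsigned long)src_len, (unsigned long)dst_len);

        /* Round trip to verify. */
        Bytef back[512];
        uLongf back_len = sizeof back;
        if (uncompress(back, &back_len, dst, dst_len) != Z_OK)
            return 1;
        printf("round trip: %.*s\n", (int)back_len, (const char *)back);
        return 0;
    }

Note that zlib's stream header and checksum cost a handful of bytes, which matters at these sizes; raw deflate (deflateInit2() with negative windowBits) trims some of that.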

sharptooth
+3  A: 

Zlib is definitely your friend here, but be sure to run a few tests to find the string length at which compression starts to pay off, because the compression headers add a small fixed overhead.

For example, you might discover that below 20 characters the compressed string is actually bigger, in which case you would only compress the longer strings.
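
A rough way to run that test; the sample strings below are placeholders, so substitute strings drawn from your real data:

    /* Compress each sample with zlib and report whether it got smaller. */
    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void)
    {
        const char *samples[] = {
            "short",
            "a somewhat longer string with repetition, repetition, repetition",
        };

        for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
            uLong raw = (uLong)strlen(samples[i]);
            Bytef out[1024];
            uLongf out_len = sizeof out;
            if (compress2(out, &out_len, (const Bytef *)samples[i], raw,
                          Z_BEST_COMPRESSION) != Z_OK)
                continue;
            printf("%3lu -> %3lu bytes: %s\n",
                   (unsigned long)raw, (unsigned long)out_len,
                   out_len < raw ? "worth compressing" : "store raw");
        }
        return 0;
    }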

small_duck
And if you can spare 1 bit of the size field to flag whether the string is compressed, you don't even have to guess: just attempt to compress each string. If it gets smaller, store it compressed; if it doesn't, store it uncompressed. This is roughly what PKZIP allows (and I assume other compressed containers; PKZIP just happens to be the one I once implemented). Unfortunately the size range 10-250 doesn't leave a spare bit in an 8-bit length field.
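
A sketch of that try-both approach, using the top bit of the question's 4-byte length word as the compressed flag (the function name and record layout are illustrative, not from the original code):

    /* Keep whichever form is smaller; the top bit of the length word
       says which one was stored. Layout is illustrative only. */
    #include <stdint.h>
    #include <string.h>
    #include <zlib.h>

    #define COMPRESSED_FLAG 0x80000000u

    /* buf must hold at least 4 + compressBound(len) bytes.
       Returns the total bytes written (length word + payload). */
    static size_t store_string(uint8_t *buf, const char *s, uint32_t len)
    {
        uLongf clen = compressBound(len);
        uint32_t word;

        if (compress2(buf + 4, &clen, (const Bytef *)s, len,
                      Z_BEST_COMPRESSION) == Z_OK && clen < len) {
            word = (uint32_t)clen | COMPRESSED_FLAG;  /* stored compressed */
            memcpy(buf, &word, 4);
            return 4 + clen;
        }
        word = len;                                   /* flag clear: raw */
        memcpy(buf, &word, 4);
        memcpy(buf + 4, s, len);
        return 4 + len;
    }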
Steve Jessop
+3  A: 

Why use a 4-byte length when strings are 10-250 bytes long? Use a 1-byte length and you save 3 bytes per string straight away.

Is the data textual only, i.e. 0-9, A-Z, a-z or some subset? If so, re-encode it to use only that subset and save a few bits per character.
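
For instance, here is a minimal sketch of that re-encoding, assuming the strings fit in a 64-symbol alphabet so each character takes 6 bits (4 characters pack into 3 bytes, a fixed 25% saving); the alphabet below is a guess, to be replaced with the characters your data actually uses:

    #include <stdint.h>
    #include <string.h>

    /* 64 symbols -> 6 bits per character. Adjust to your real subset. */
    static const char ALPHABET[64] =
        "0123456789"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "abcdefghijklmnopqrstuvwxyz"
        " .";

    /* Packs n characters of s into out as 6-bit codes; returns bytes
       written. Characters outside the alphabet map to code 0 here. */
    static size_t pack6(const char *s, size_t n, uint8_t *out)
    {
        uint32_t acc = 0;   /* bit accumulator */
        int bits = 0;       /* number of bits currently held in acc */
        size_t w = 0;

        for (size_t i = 0; i < n; i++) {
            const char *p = memchr(ALPHABET, s[i], sizeof ALPHABET);
            acc = (acc << 6) | (p ? (uint32_t)(p - ALPHABET) : 0);
            bits += 6;
            while (bits >= 8) {
                bits -= 8;
                out[w++] = (uint8_t)(acc >> bits);
            }
        }
        if (bits > 0)                     /* flush the last partial byte */
            out[w++] = (uint8_t)(acc << (8 - bits));
        return w;
    }

Decoding reverses the process; unlike zlib, this gives the same fixed saving on every string, however short.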

Now have a look at the Huffman encoding and Lempel-Ziv sections of http://gnosis.cx/publish/programming/compression_primer.html.

That should get you started.

+4  A: 

I am not sure that the zlib or LZW compression approaches will work well in the case of individually compressing short strings of less than 250 bytes. Both typically require creating a fairly sizable dictionary before significant compression gains are seen.

Perhaps simple Huffman coding with a fixed encoding tree, or one shared between all instances of the strings? Also, have you seen the ZSCII encoding used to compress short strings on memory-constrained microcomputers in the 80s?
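
If you would rather stay with zlib, its preset-dictionary feature gives a similar sharing effect: prime the stream with bytes typical of your strings so even a short string can refer back to them. A sketch, where the dictionary contents are a placeholder to be built from your real data:

    /* Compress one string with a preset dictionary shared by all strings. */
    #include <string.h>
    #include <zlib.h>

    static const char DICT[] =
        "words and phrases that recur across your strings go here";

    /* dst_len holds the capacity of dst on entry and the compressed
       size on success. Returns 0 on success, -1 on error. */
    static int compress_with_dict(const char *src, unsigned src_len,
                                  unsigned char *dst, unsigned long *dst_len)
    {
        z_stream zs;
        memset(&zs, 0, sizeof zs);
        if (deflateInit(&zs, Z_BEST_COMPRESSION) != Z_OK)
            return -1;
        deflateSetDictionary(&zs, (const Bytef *)DICT, sizeof DICT - 1);

        zs.next_in   = (Bytef *)src;
        zs.avail_in  = src_len;
        zs.next_out  = dst;
        zs.avail_out = (uInt)*dst_len;

        int rc = deflate(&zs, Z_FINISH);   /* one-shot compress */
        *dst_len = zs.total_out;
        deflateEnd(&zs);
        return rc == Z_STREAM_END ? 0 : -1;
    }

On the reading side, inflate() returns Z_NEED_DICT, at which point you call inflateSetDictionary() with the same bytes.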


+2  A: 

Most compression algorithms don't work very well with short strings. Here are a few that are designed for short English text. While they can handle arbitrary bytes in the plaintext, such bytes often make the "compressed" data longer than the plaintext, so it's a good idea for the compressor to store incompressible data unchanged and set a "literal" flag on it (as Steve Jessop suggested).

  • "base 40 encoding": maximum compression 3:2
  • "Zork Standard Code for Information Interchange" (ZSCII): maximum compression 3:2
  • byte pair compression: maximum compression 2:1
  • a static Huffman table shared among all the strings (as suggested out by cygil).
    • ideally, formed from the exact character frequencies of all of your actual data.
    • Varicode: maximum compression 2:1
  • PalmDoc compression (byte pair compression + a simple variant of LZ77).
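
A toy sketch of byte pair compression, exploiting the fact that ASCII text never uses byte values 128-255, so each of those codes can stand in for a frequent pair (the function name and in-place scheme are illustrative):

    #include <string.h>

    /* Byte pair compression, in place. buf holds ASCII (values < 128), so
       codes 128+ are free to represent pairs. pairs[i] records the byte
       pair that code 128+i stands for; decompression expands the codes in
       reverse order of creation. O(n^2) pair search is fine for <=250 bytes. */
    static size_t bpe_compress(unsigned char *buf, size_t len,
                               unsigned char pairs[128][2], int *npairs)
    {
        *npairs = 0;
        while (*npairs < 128) {
            /* Find the most frequent adjacent byte pair. */
            size_t best = 1, best_at = 0;
            for (size_t i = 0; i + 1 < len; i++) {
                size_t c = 0;
                for (size_t j = 0; j + 1 < len; j++)
                    if (buf[j] == buf[i] && buf[j + 1] == buf[i + 1])
                        c++;
                if (c > best) { best = c; best_at = i; }
            }
            if (best < 2)
                break;                    /* no pair occurs twice; done */

            unsigned char a = buf[best_at], b = buf[best_at + 1];
            unsigned char code = (unsigned char)(128 + (*npairs)++);
            pairs[code - 128][0] = a;
            pairs[code - 128][1] = b;

            /* Rewrite the buffer, substituting the new code for the pair. */
            size_t w = 0;
            for (size_t r = 0; r < len; ) {
                if (r + 1 < len && buf[r] == a && buf[r + 1] == b) {
                    buf[w++] = code;
                    r += 2;
                } else {
                    buf[w++] = buf[r++];
                }
            }
            len = w;
        }
        return len;
    }

The pair table has to travel with the data, which is expensive per 250-byte string; in practice you would precompute one table from a corpus of representative strings and share it across all of them.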
David Cary