views:

240

answers:

5

What's the simplest, but still efficient, compression algorithm?

Deflate, LZMA, etc. aren't valid options. I need something that compiles really small, like RLE, LZX, Huffman, etc.

Note: The data is 95% ASCII text.
Edit: The data is ~20 KB at the moment, but I expect it to grow up to ~1 MB.
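Since RLE is one of the options named above, here is a minimal byte-oriented sketch of it (function names and the (count, byte) token format are my own choices for illustration, not from the thread). Worst case the output doubles in size, so the caller must supply an output buffer of at least twice the input length:

```c
#include <stddef.h>

/* Encode runs as (count, byte) pairs; runs are capped at 255 bytes. */
size_t rle_encode(const unsigned char *in, size_t len, unsigned char *out)
{
    size_t o = 0;
    for (size_t i = 0; i < len; ) {
        unsigned char b = in[i];
        size_t run = 1;
        while (i + run < len && in[i + run] == b && run < 255)
            run++;
        out[o++] = (unsigned char)run;
        out[o++] = b;
        i += run;
    }
    return o;  /* compressed size */
}

/* Expand (count, byte) pairs back into the original data. */
size_t rle_decode(const unsigned char *in, size_t len, unsigned char *out)
{
    size_t o = 0;
    for (size_t i = 0; i + 1 < len; i += 2)
        for (unsigned char k = 0; k < in[i]; k++)
            out[o++] = in[i + 1];
    return o;  /* decompressed size */
}
```

Note that RLE only pays off when the data actually contains runs; on plain English text it typically expands the input, which is why the dictionary-based suggestions below are usually a better fit for ASCII text.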

+4  A: 

It sounds like LZO was designed to meet your requirements:

  • Decompression is simple and very fast.
  • Requires no memory for decompression.
  • Compression is pretty fast.
Greg Hewgill
Works like a charm, compressed data size is ~58% of the original
arthurprs
Update: I made some optimizations to the file structure; the compressed size is now ~40% of the original.
arthurprs
+2  A: 

You could try http://scz-compress.sourceforge.net/

Keith Nicholas
Looks really promising, I will definitely take a look.
arthurprs
+1  A: 

Most dictionary schemes will do nicely; any of the LZ family. We use an LZ77 variant on embedded systems for a lot of our simple compression work, and it performs beautifully with almost no memory overhead. What kind of system is compressing, and what is decompressing? That will determine the type of compressor you can get away with.

Michael Dorgan
I came across two very good implementations: http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/os/compress.c and http://michael.dipperstein.com/lzw/. The first is really small and the compression is great.
arthurprs
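To make the LZ77 idea above concrete, here is a greedy sketch (token layout, window size, and names are my own illustration, not taken from either linked implementation). Each 3-byte token is (offset, length, next literal); offset 0 means "literal only". The decoder needs no working memory beyond the output buffer, which is what makes this family attractive on embedded targets:

```c
#include <stddef.h>

#define WINDOW 255  /* small back-reference window for illustration */
#define MINLEN 3    /* shorter matches are cheaper to emit as literals */

size_t lz77_encode(const unsigned char *in, size_t len, unsigned char *out)
{
    size_t i = 0, o = 0;
    while (i < len) {
        size_t best_len = 0, best_off = 0;
        size_t start = i > WINDOW ? i - WINDOW : 0;
        /* Brute-force search of the window for the longest match. */
        for (size_t j = start; j < i; j++) {
            size_t m = 0;
            while (i + m < len && m < 255 && in[j + m] == in[i + m])
                m++;
            if (m > best_len) { best_len = m; best_off = i - j; }
        }
        /* Leave at least one byte for the trailing literal. */
        if (best_len > len - i - 1)
            best_len = len - i - 1;
        if (best_len >= MINLEN) {
            out[o++] = (unsigned char)best_off;
            out[o++] = (unsigned char)best_len;
            out[o++] = in[i + best_len];
            i += best_len + 1;
        } else {
            out[o++] = 0;
            out[o++] = 0;
            out[o++] = in[i++];
        }
    }
    return o;
}

size_t lz77_decode(const unsigned char *in, size_t len, unsigned char *out)
{
    size_t o = 0;
    for (size_t i = 0; i + 2 < len; i += 3) {
        unsigned char off = in[i], n = in[i + 1];
        /* Byte-by-byte copy so overlapping matches work correctly. */
        for (unsigned char k = 0; k < n; k++, o++)
            out[o] = out[o - off];
        out[o++] = in[i + 2];  /* trailing literal */
    }
    return o;
}
```

A real codec would replace the brute-force match search with a hash chain and pack the tokens into a tighter bit layout, but the structure is the same.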
A: 

http://www.melvilletheatre.com/articles/cstuff/2.html

Frank Cox
Great piece of code, but the strings are limited to chars between Chr(32) and Chr(127).
arthurprs
+1  A: 

This benchmark has a lot of comparisons. Check it out; it also shows which algorithms each compressor uses.

Iulian Şerbănoiu
See also this link: http://cs.fit.edu/~mmahoney/compression/rationale.html
Iulian Şerbănoiu