Every file compressor implementation I have seen compresses arrays of bytes.

But it could compress arrays of shorts, or even ints, instead.

If each symbol in a Huffman binary tree represents a byte, then in the best case a symbol compresses 8 bits down to a single bit, an 8:1 ratio.

If each symbol in a Huffman tree represents a short, then in the best case a symbol compresses 16 bits down to a single bit, a 16:1 ratio.
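
For concreteness, here is a minimal sketch of what I mean (Python; the input file name and tie-breaking details are just illustrative), building the code table over 16-bit symbols instead of bytes:

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table {symbol: bit string} for any hashable symbols."""
    freq = Counter(symbols)
    # Heap entries: (weight, tie-breaker, {symbol: code}); the tie-breaker
    # keeps tuple comparison away from the dicts.
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    # (A one-symbol input yields an empty code; real codecs special-case that.)
    return heap[0][2] if heap else {}

# Read the file as 16-bit symbols rather than single bytes.
with open("input.bin", "rb") as f:  # hypothetical input file
    data = f.read()
symbols = [int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data) - 1, 2)]
codes = huffman_codes(symbols)
```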

Is this correct?

Could someone update Wikipedia with this additional Huffman coding information?

A: 

Absolutely correct. Anyway, there is little use (other than intellectual challenge or training) in implementing compression algorithms yourself, since almost every language has them in its standard library.

ammoQ
I disagree that there is little use. In certain circumstances there is little use; in others (much fewer in number), there is great utility in it.
San Jacinto
Does "there is little use" mean that an 8-bit input symbol size is a good choice for best compression?
Arabcoder
A huffman encoder/decoder is typically my implementation of choice when learning a new language to prove to myself that I have some grasp of the language's fundamental constructs. :)
Greg D
+1  A: 

Arabcoder, your assumptions are correct.

As a side note: a lot of 8-bit Huffman codecs don't compress only the 256 natural symbols of a byte. They also add one or more special symbols, used to detect the end of the Huffman stream or to switch from one Huffman tree to another...
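
For example (a hypothetical sketch, reusing a `huffman_codes` helper like the one sketched in the question): reserve the value 256 as an out-of-band end-of-stream symbol, since real bytes only occupy 0..255:

```python
EOF = 256  # out-of-band: byte values only cover 0..255

def encode_with_eof(data: bytes, codes: dict) -> str:
    # Terminate with the EOF code so the decoder can stop without a length prefix.
    return "".join(codes[b] for b in data) + codes[EOF]

# Build the table over the byte values plus the one extra symbol:
# codes = huffman_codes(list(data) + [EOF])
```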

Nils Pipenbrinck
+4  A: 

The optimal compression is to treat your whole file as a single token and compress it using the zero-length Huffman code. This gives you an infinite compression ratio. Unfortunately, the description of the Huffman code will be quite large.

Mark Byers
apparently this is the answer that I want...
Arabcoder
ROFL The description would be, ooo, slightly larger than the source file perhaps?
Lazarus
Well, the smallest Huffman code is 1 bit. The description of the only token would be the size of the whole file. So you're looking at an encoded size of at LEAST the original file size, plus 1 bit. lol
Matt Joiner
A: 

Huffman is a quite old compression method and is rarely used on its own. It is one of the basic compression methods taught in a course. Considering that many files like JPEG, PDF, or JAR are already compressed, running plain Huffman compression on them won't give you much.

I am saying this because I tried it. And this applies even when you optimize the symbol table a lot.
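
You can see this for yourself with a quick experiment (a sketch; the file name is hypothetical, and zlib's deflate uses Huffman coding internally):

```python
import zlib

with open("photo.jpg", "rb") as f:  # hypothetical, already-compressed input
    data = f.read()

recompressed = zlib.compress(data, 9)
print(len(data), len(recompressed))  # expect little or no reduction
```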

Xolve
I am not talking about Huffman only, but about range coders and arithmetic coding too; I am interested in the input symbol size in bits that gives the best compression.
Arabcoder
You say "Huffman is quite old compression method and is not used as such" and then go on to mention 3 file formats that use a suite of compression algorithms including Huffman! JPEG's CCITT 1-dimensional encoding is a form of Huffman encoding, and (de)flate, used in both PDF and JAR^H^H^HZIP file formats uses Huffman encoding.
Laurence Gonsalves
Ha, yeah. Old doesn't mean bad. Huffman remains a very effective greedy lossless algorithm.
Greg D
+4  A: 

That is correct, but it isn't as amazing as it sounds.

There are two pieces of data that must be transferred to decode a Huffman-encoded byte stream. The encoded stream (of course) is required, but so is the dictionary that will allow you to properly rebuild the Huffman tree to perform the decoding.

Using larger tokens to encode your data will always result in a smaller encoded stream. Unfortunately, unless you have data with some pretty specific and special characteristics, larger tokens will also cause your dictionary size to grow surprisingly fast. The degenerate case (referred to in Mark Byers' answer) would result in the entire, uncompressed data stream being a single token and the encoded stream being a single bit, resulting in absolutely no compression.

Thus, Huffman coding (like almost everything) is an exercise in tradeoffs. Striking a balance between efficiency of the encoded file and the size of the dictionary can be tricky. I've never performed the actual analysis based on data characteristics to find out what various ideal token sizes might be, but I think bytes tend to be used because it's a simple point to divide on and will generally result in some real compression. I know back in college I did it once as an exercise with four-byte tokens, but I couldn't honestly say that it was somehow better than one-byte tokens.
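
One quick way to get a feel for that dictionary blow-up is to count how many distinct tokens appear at each width (a measurement sketch; the file name is hypothetical). Each distinct token needs a dictionary entry, so this count bounds the dictionary size from below:

```python
def distinct_tokens(data: bytes, width: int) -> int:
    # Number of unique fixed-width tokens in the data.
    return len({data[i:i + width] for i in range(0, len(data) - width + 1, width)})

with open("input.bin", "rb") as f:  # hypothetical input file
    data = f.read()

for width in (1, 2, 4):
    # At most 256 distinct tokens for width 1, but up to 2**32 for width 4.
    print(width, distinct_tokens(data, width))
```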

Of course, it's also possible to cheat and, instead of building the dictionary dynamically to get genuinely greedy compression, you can use a pre-built tree and compress with that. You'd then avoid transmitting the dictionary, but the decoder would also have to have the same dictionary to decode the data.
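
A toy sketch of that shortcut (the code table values are invented for illustration): both sides hard-code the same prefix-free table, so no dictionary travels with the data:

```python
# Pre-agreed table shared by encoder and decoder (toy values, not from real data).
STATIC_CODES = {"e": "0", "t": "10", "a": "110", "o": "111"}
STATIC_DECODE = {code: sym for sym, code in STATIC_CODES.items()}

def encode(text: str) -> str:
    return "".join(STATIC_CODES[ch] for ch in text)

def decode(bits: str) -> str:
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in STATIC_DECODE:  # prefix-free, so the first match is unambiguous
            out.append(STATIC_DECODE[cur])
            cur = ""
    return "".join(out)

assert decode(encode("tea")) == "tea"
```

The price, as noted above, is that a fixed table only compresses well when the data actually matches its assumed symbol distribution.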

Greg D
A: 

By the way, Huffman coding is always the same as or worse than arithmetic coding. Huffman coding was used for many years because arithmetic coding was patented until recently, and because Huffman is a little bit easier to implement. Today, when designing a new compression algorithm, there isn't much reason to use Huffman anymore; arithmetic coding should always be used instead.
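
A small illustrative computation (with a made-up, heavily skewed distribution) shows the gap: Huffman must spend at least one whole bit per symbol, while arithmetic coding can approach the entropy:

```python
from math import log2

# Hypothetical skewed source: 'a' with probability 0.99, 'b' with 0.01.
probs = {"a": 0.99, "b": 0.01}

entropy = -sum(p * log2(p) for p in probs.values())
print(f"entropy: {entropy:.4f} bits/symbol")  # ~0.0808, approachable by arithmetic coding

# Huffman assigns each symbol a whole number of bits; here both get 1 bit,
# so it emits 1.0 bit per symbol, more than 12x the entropy of this source.
huffman = sum(p * 1 for p in probs.values())
print(f"huffman: {huffman:.4f} bits/symbol")
```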

Foo Bar