That is correct, but it isn't as amazing as it sounds.
There are two pieces of data that must be transferred to decode a Huffman-encoded byte stream. The encoded stream (of course) is required, but so is the dictionary that lets you rebuild the Huffman tree and perform the decoding.
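To make the "two pieces" concrete, here's a toy Python sketch with a made-up three-symbol prefix code (the codebook and the message are purely illustrative); the bit string on its own is meaningless without the codebook that produced it:

```python
# Piece 1: the dictionary (a hypothetical prefix code for a tiny alphabet).
codebook = {"a": "0", "b": "10", "c": "11"}

def encode(text):
    # Piece 2: the encoded stream.
    return "".join(codebook[ch] for ch in text)

def decode(bits):
    # Invert the dictionary and walk the bits, emitting a symbol whenever a
    # complete codeword is seen; this is "rebuilding the tree" in miniature.
    inverse = {code: sym for sym, code in codebook.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

bits = encode("abacca")
print(bits)                       # 0100111100
assert decode(bits) == "abacca"   # only possible because we also have `codebook`
```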
Using larger tokens to encode your data will always result in a smaller encoded stream. Unfortunately, unless your data has some pretty specific and special characteristics, larger tokens will also cause your dictionary to grow surprisingly large. The degenerate case (referred to by Mark Byers' answer) would make the entire, uncompressed data stream a single token and the encoded stream a single bit, giving absolutely no compression.
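Here's a rough, pure-Python sketch of that tradeoff using the standard greedy heap construction; the sample data (the script just reads itself) is arbitrary, so treat the numbers as illustrative rather than a benchmark. It tokenizes the same input at 1-, 2-, and 4-byte granularity and reports the payload bits against the number of dictionary entries you'd also have to ship:

```python
import heapq
from collections import Counter

def huffman_codes(tokens):
    """Greedy Huffman construction: returns {token: bit string}."""
    freq = Counter(tokens)
    if len(freq) == 1:                      # degenerate single-token stream
        return {next(iter(freq)): "0"}
    # Heap entries are (weight, unique id, {token: code so far}).
    heap = [(w, i, {t: ""}) for i, (t, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, i2, right = heapq.heappop(heap)
        # Merge the two lightest subtrees, prepending a bit to each code.
        merged = {t: "0" + c for t, c in left.items()}
        merged.update({t: "1" + c for t, c in right.items()})
        heapq.heappush(heap, (w1 + w2, i2, merged))
    return heap[0][2]

data = open(__file__, "rb").read()          # any handy sample data will do

for size in (1, 2, 4):
    tokens = [data[i:i + size] for i in range(0, len(data), size)]
    table = huffman_codes(tokens)
    payload_bits = sum(len(table[t]) for t in tokens)
    print(f"{size}-byte tokens: {payload_bits:6d} payload bits, "
          f"{len(table):4d} dictionary entries")
```

On most ordinary inputs the payload shrinks as the tokens grow, while the number of dictionary entries (each of which also has to be serialized and sent) balloons; push the token size far enough and you land in the degenerate case above.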
Thus, Huffman coding (like almost everything) is an exercise in tradeoffs. Striking a balance between the efficiency of the encoded file and the size of the dictionary can be tricky. I've never performed the actual analysis based on data characteristics to find out what the ideal token sizes might be, but I think bytes tend to be used because they're a simple point to divide on and will generally result in some real compression. I know back in college I did it once as an exercise with four-byte tokens, but I couldn't honestly say it was any better than one-byte tokens.
Of course, it's also possible to cheat: instead of building the dictionary dynamically to get genuinely greedy compression, you can compress with a pre-built tree. You then avoid transmitting the dictionary, but the decoder has to have that same dictionary built in to decode the data.
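Here's a minimal sketch of that approach, assuming both the encoder and the decoder are built with the same static table (the codes below are invented for illustration, not derived from real frequency data); only the encoded bits ever cross the wire:

```python
# Hypothetical static code table baked into both ends, e.g. derived once from
# representative training text. The dictionary itself is never transmitted.
STATIC_CODES = {
    " ": "00", "e": "010", "t": "011", "a": "100",
    "o": "1010", "n": "1011", "s": "1100", "h": "1101",
    "r": "1110", "i": "11110", "u": "11111",
}
STATIC_DECODE = {code: sym for sym, code in STATIC_CODES.items()}

def encode(text):
    return "".join(STATIC_CODES[ch] for ch in text)

def decode(bits):
    out, buf = [], ""
    for bit in bits:            # same bit walk as before, but the table is baked in
        buf += bit
        if buf in STATIC_DECODE:
            out.append(STATIC_DECODE[buf])
            buf = ""
    return "".join(out)

msg = "the sun rose"
assert decode(encode(msg)) == msg   # works only because both ends share the table
```

The obvious cost is that a fixed table tuned for one kind of data (English text here) can compress poorly, or not at all, on anything with different statistics.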