I need to compress portions of our application's network traffic for performance. I presume this means I should stay away from algorithms like bzip2, which I've heard is slower.
You can use Deflater/Inflater, which are built into the JDK (java.util.zip). There are also GZIPInputStream and GZIPOutputStream, but which to use really depends on your exact requirements.
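For example, here is a minimal sketch of a Deflater/Inflater round trip (the payload and buffer sizes are placeholders); for the gzip container format you would substitute GZIPOutputStream/GZIPInputStream, which wrap streams the same way without an explicit Deflater/Inflater:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

public class ZipDemo {

    // Compress a raw byte payload using the deflate (zlib) format.
    static byte[] deflate(byte[] data) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DeflaterOutputStream out =
                new DeflaterOutputStream(buf, new Deflater(Deflater.BEST_SPEED));
        out.write(data);
        out.finish();
        return buf.toByteArray();
    }

    // Decompress it again with an Inflater wrapped in a stream.
    static byte[] inflate(byte[] compressed) throws IOException {
        InflaterInputStream in =
                new InflaterInputStream(new ByteArrayInputStream(compressed));
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        return buf.toByteArray();
    }
}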
Edit:
Reading the further comments, it looks like the network traffic is HTTP. Depending on the server, it probably already supports compression (typically deflate/gzip); the question then becomes the client. If the client is a browser, it almost certainly supports it. If your client is a web-services client or an HTTP client library, check that package's documentation to see whether compression is supported.
It looks like jakarta-commons HttpClient may require you to handle the compression manually. To enable it on the client side, you will need to do something like
method.addRequestHeader("Accept-Encoding", "gzip,deflate");  // method is your GetMethod/PostMethod
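A rough sketch of how that fits together with the Commons HttpClient 3.x API (the URL is a placeholder, and since that client does not decode the response for you, the gunzipping is done by hand):

import java.io.InputStream;
import java.util.zip.GZIPInputStream;

import org.apache.commons.httpclient.Header;
import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;

public class GzipGet {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        GetMethod get = new GetMethod("http://example.com/data");   // placeholder URL
        get.addRequestHeader("Accept-Encoding", "gzip,deflate");    // advertise support
        try {
            client.executeMethod(get);
            InputStream body = get.getResponseBodyAsStream();
            Header enc = get.getResponseHeader("Content-Encoding");
            if (enc != null && enc.getValue().indexOf("gzip") >= 0) {
                body = new GZIPInputStream(body);  // decompress the body manually
            }
            // ... read the (possibly decompressed) body here ...
        } finally {
            get.releaseConnection();
        }
    }
}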
If the network traffic is going over HTTP, most web servers and servlet containers support negotiated compression, e.g., mod_deflate for Apache.
Your choice of compression algorithm depends on what you're trying to optimize and how much bandwidth you have available.
If you're on a gigabit LAN, almost any compression algorithm is going to slow your program down a bit. If you're connecting over a WAN or the internet, you can afford to do a bit more compression. If you're connected over dial-up, you should compress as much as absolutely possible.
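To make that trade-off concrete, java.util.zip.Deflater takes the compression level at construction time; the link classification below is purely hypothetical:

import java.util.zip.Deflater;

public class LevelChooser {
    enum Link { LAN, WAN, DIALUP }   // hypothetical classification of the connection

    static Deflater deflaterFor(Link link) {
        switch (link) {
            case LAN: return new Deflater(Deflater.NO_COMPRESSION);    // not worth the CPU
            case WAN: return new Deflater(Deflater.BEST_SPEED);        // cheap compression
            default:  return new Deflater(Deflater.BEST_COMPRESSION);  // squeeze every byte
        }
    }
}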
If this is a WAN, you may find hardware solutions like Riverbed's are more effective, as they work across a range of traffic, and don't require any changes to software.
I have a test case which shows the relative compression difference between Deflate, Filtered, BZip2, and LZMA. Simply plug in a sample of your data and compare the timings between two machines.
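Something in the same spirit can be knocked together for the JDK's Deflater on its own (BZip2 and LZMA need a third-party library such as Commons Compress or XZ for Java, not shown here); plug a real sample of your traffic into sample and run it on both machines:

import java.util.zip.Deflater;

public class CompressionBench {
    public static void main(String[] args) {
        byte[] sample = new byte[1 << 20];   // replace with a real sample of your traffic
        int[] levels = { Deflater.BEST_SPEED, Deflater.DEFAULT_COMPRESSION, Deflater.BEST_COMPRESSION };

        for (int level : levels) {
            Deflater deflater = new Deflater(level);
            // Output buffer sized generously so a single deflate() call can finish.
            byte[] out = new byte[sample.length * 2 + 64];
            long start = System.nanoTime();
            deflater.setInput(sample);
            deflater.finish();
            int compressedSize = deflater.deflate(out);
            long elapsedMs = (System.nanoTime() - start) / 1000000;
            deflater.end();
            System.out.printf("level %d: %d -> %d bytes in %d ms%n",
                    level, sample.length, compressedSize, elapsedMs);
        }
    }
}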