A few things that I've found cause problems with rolling your own compression.
First. Something that causes a complete change to how the response is handled (Server.Transfer, an HTTP module deferring to another HTTP module) may clear the headers but keep the filter stream, so the body goes out compressed with no Content-Encoding header to say so. Fiddler will quickly tell you if this is the case. One possibility is that this happens when you fall through to your error response, and an error only occurs in the Firefox case. Forcibly decompressing the stream yourself should help diagnose here.
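If you want to inspect by hand, run the captured body (e.g. the raw bytes saved from Fiddler) back through the relevant decompression stream. A minimal sketch, assuming gzip (substitute DeflateStream for a deflate-encoded body; DecompressGzip is just an illustrative helper name):

using System.IO;
using System.IO.Compression;

static byte[] DecompressGzip(byte[] body)
{
    //Run the captured bytes through GZipStream in decompress mode and collect
    //the output; if this throws or produces garbage, the body wasn't the
    //single clean gzip stream the headers led the browser to expect.
    using(MemoryStream src = new MemoryStream(body))
    using(GZipStream gz = new GZipStream(src, CompressionMode.Decompress))
    using(MemoryStream dst = new MemoryStream())
    {
        byte[] buf = new byte[4096];
        int read;
        while((read = gz.Read(buf, 0, buf.Length)) > 0)
            dst.Write(buf, 0, read);
        return dst.ToArray();
    }
}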
Second. Conversely, a sequence of events could have doubled up the headers and/or the compression, so you end up sending a gzip of a gzip or the like. Worse yet, the filter may have been changed part-way through the response.
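One quick way to spot the gzip-of-a-gzip case: after one pass of decompression the output still starts with the gzip magic bytes. A small check (LooksLikeGzip is just an illustrative helper):

static bool LooksLikeGzip(byte[] data)
{
    //A gzip stream always begins with the two magic bytes 0x1F, 0x8B; if the
    //output of one decompression pass still starts with them, the body was
    //compressed twice.
    return data.Length >= 2 && data[0] == 0x1F && data[1] == 0x8B;
}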
Third. Just putting a DeflateStream or GZipStream in as the filter does not correctly handle the case where chunked encoding is used (buffering is off, HttpResponse.Flush() is called, or the response is larger than the largest allowed buffer size). The following stream class handles this case correctly (it's the override of Flush() that does the fix; the extra public properties are ones I have found useful in dealing with the cases described above).
using System;
using System.IO;
using System.IO.Compression;

public enum CompressionType
{
    Deflate,
    GZip
}

public sealed class WebCompressionFilter : Stream
{
    private readonly Stream _compSink;
    private readonly Stream _finalSink;

    public WebCompressionFilter(Stream stm, CompressionType comp)
    {
        switch(comp)
        {
            case CompressionType.Deflate:
                _compSink = new DeflateStream((_finalSink = stm), CompressionMode.Compress);
                break;
            case CompressionType.GZip:
                _compSink = new GZipStream((_finalSink = stm), CompressionMode.Compress);
                break;
            default:
                throw new ArgumentException("Unknown compression type.", "comp");
        }
    }

    //The stream the compressed output is written to (normally the previous
    //Response.Filter). Handy when diagnosing the doubled-up cases above.
    public Stream Sink
    {
        get { return _finalSink; }
    }

    public CompressionType CompressionType
    {
        get { return _compSink is DeflateStream ? CompressionType.Deflate : CompressionType.GZip; }
    }

    public override bool CanRead
    {
        get { return false; }
    }

    public override bool CanSeek
    {
        get { return false; }
    }

    public override bool CanWrite
    {
        get { return true; }
    }

    public override long Length
    {
        get { throw new NotSupportedException(); }
    }

    public override long Position
    {
        get { throw new NotSupportedException(); }
        set { throw new NotSupportedException(); }
    }

    public override void Flush()
    {
        //We do not flush the compression stream. At best this does nothing, at worst it
        //loses a few bytes. We do however flush the underlying stream to send bytes down
        //the wire.
        _finalSink.Flush();
    }

    public override long Seek(long offset, SeekOrigin origin)
    {
        throw new NotSupportedException();
    }

    public override void SetLength(long value)
    {
        throw new NotSupportedException();
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        throw new NotSupportedException();
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        _compSink.Write(buffer, offset, count);
    }

    public override void WriteByte(byte value)
    {
        _compSink.WriteByte(value);
    }

    public override void Close()
    {
        //Closing the compression stream first makes it write any remaining
        //compressed data (and, for gzip, the trailing checksum) to the sink.
        _compSink.Close();
        _finalSink.Close();
        base.Close();
    }

    protected override void Dispose(bool disposing)
    {
        if(disposing)
        {
            _compSink.Dispose();
            _finalSink.Dispose();
        }
        base.Dispose(disposing);
    }
}
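To put the filter to use, something along these lines will do. This is a minimal sketch, assuming it lives in Global.asax (where System.Web is already imported); the Accept-Encoding check is deliberately naive (no q-value parsing), and it tries deflate first for the reasons given at the end of this answer:

protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpContext context = HttpContext.Current;
    string accept = context.Request.Headers["Accept-Encoding"] ?? "";
    //Naive negotiation: no q-values, deflate preferred (see the closing note).
    if(accept.Contains("deflate"))
    {
        context.Response.Filter =
            new WebCompressionFilter(context.Response.Filter, CompressionType.Deflate);
        context.Response.AppendHeader("Content-Encoding", "deflate");
    }
    else if(accept.Contains("gzip"))
    {
        context.Response.Filter =
            new WebCompressionFilter(context.Response.Filter, CompressionType.GZip);
        context.Response.AppendHeader("Content-Encoding", "gzip");
    }
}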
Fourth. With a content-encoding (rather than a transfer-encoding), HTTP considers you to be sending a different entity than you would with a different encoding. (Transfer-encoding considers you to just be using the encoding so that less bandwidth is used, which is what we normally really want; alas, support for transfer-encoding isn't as common, so we kludge by using content-encoding instead.) As such you need to make sure that e-tags, if there are any, differ between the different encodings (adding a G for gzip and a D for deflate before the last " character should do the trick; just don't repeat mod_gzip's bug of putting it after the " character).
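For instance, a small helper to mark an e-tag in place (MarkETag is just an illustrative name):

static string MarkETag(string etag, char marker)
{
    //Insert the marker *before* the closing quote, so "xyz" becomes "xyzG"
    //for gzip; appending after the quote reproduces mod_gzip's bug.
    int lastQuote = etag.LastIndexOf('"');
    return lastQuote < 0 ? etag + marker : etag.Insert(lastQuote, marker.ToString());
}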
Fifth. Related to this, you must send an appropriate Vary header, given that what you send varies according to the content-encoding the client will accept. Doing this correctly means sending Vary: Accept-Encoding, to indicate that what you send depends on the value of that header. Because this causes issues with IE (thankfully the next version will have some improvement, according to MS), some people send Vary: User-Agent instead (on the basis that most user-agents either always accept compressed content-encodings or never do, rather than requesting them sometimes and not others). Note that you need to set the Vary header whenever you are prepared to compress, even on responses you don't actually compress.
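Following on from the wiring sketch above, that means something like:

//Set Vary on every response you *might* compress, before and regardless of
//the per-response decision (or Vary: User-Agent for the IE kludge).
context.Response.AppendHeader("Vary", "Accept-Encoding");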
Sixth. Even if you're doing everything perfectly, a response cached earlier in your development can mess things up, since you've changed the rules for caching after it was cached. Clear your cache.
If none of those fit the bill, at least look at what you see in a tool like Fiddler, and at what you get if you manually decompress the stream sent to Firefox; it should definitely help.
Incidentally, your code above favours GZip over Deflate, whatever the client's stated preference. If I were going to ignore the client's stated preference order, I'd do it the other way around. Since GZip is built on Deflate, GZip output is always slightly larger than Deflate output. That difference is negligible, but more importantly some implementations take much more CPU time to work with gzip data than with deflate data, and this depends on architecture as well as software (so testing on one machine doesn't tell you enough to judge whether it applies). For a client running their browser on a low-end machine, the appreciable difference between gzip and deflate can amount to more than the cost of downloading the few extra octets gzip sends.