I'm using the following code to compress a small (~4kB) HTML file in C#.

byte[] fileBuffer = ReadFully(inFile, ResponsePacket.maxResponsePayloadLength); // Read the entire requested HTML file into a memory buffer
inFile.Close();                                                                 // Close the requested HTML file

byte[] payload;
using (MemoryStream compMS = new MemoryStream())                                       // Create a new memory stream to hold the compressed HTML data
{
    using (GZipStream gzip = new GZipStream(compMS, CompressionMode.Compress))            // Create a new GZip object pointing to the empty memory stream
    {
        gzip.Write(fileBuffer, 0, fileBuffer.Length);                                   // Compress the file buffer and write it to the empty memory stream
        gzip.Close();                                                                   // Close the GZip object
    }
    payload = compMS.GetBuffer();                                            // Write the compressed file buffer data in the memory stream to a byte buffer
}

The resulting compressed data is about 2 kB, but roughly half of it is just zeroes. This is for a very bandwidth-sensitive application (which is why I'm bothering to compress 4 kB in the first place), so the extra 1 kB of zeroes is wasted valuable space. My best guess is that the compression algorithm is padding the data out to a block boundary. If so, is there any way to override this behavior or change the block size? I get the same results with the vanilla .NET GZipStream, zlib's GZipStream, and DeflateStream. Thanks.

A: 

Wrong MemoryStream method. GetBuffer() returns the underlying buffer, which is always at least as large as the data in the stream and usually larger. It's efficient because no copy needs to be made.
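
A minimal sketch of the difference (the exact capacity is an implementation detail of how MemoryStream grows its backing array; the 1024 below is what the default growth policy happens to produce here):

using System;
using System.IO;

class BufferVsToArray
{
    static void Main()
    {
        byte[] chunk = new byte[100];
        MemoryStream ms = new MemoryStream();
        for (int i = 0; i < 10; i++)
            ms.Write(chunk, 0, chunk.Length);       // 1000 bytes, written in several pieces
        Console.WriteLine(ms.Length);               // 1000 -- bytes actually in the stream
        Console.WriteLine(ms.GetBuffer().Length);   // 1024 here -- backing array, rounded up as the stream grew
        Console.WriteLine(ms.ToArray().Length);     // 1000 -- a fresh copy, trimmed to Length
    }
}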

But you need the ToArray() method here. Or use the Length property.
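
Applied to the snippet in the question, that's a one-line change (a sketch reusing the question's fileBuffer; note that the GZipStream must be disposed before reading the result, since disposal flushes the final compressed block):

byte[] payload;
using (MemoryStream compMS = new MemoryStream())
{
    using (GZipStream gzip = new GZipStream(compMS, CompressionMode.Compress))
    {
        gzip.Write(fileBuffer, 0, fileBuffer.Length);
    }                                   // Dispose flushes the last compressed block into compMS
    payload = compMS.ToArray();         // Copies exactly Length bytes -- no trailing zeroes.
                                        // ToArray() still works after the stream has been closed.
}

Or, to avoid the extra copy, keep GetBuffer() and transmit only the first (int)compMS.Length bytes of it.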

Hans Passant
D'oh. Good call, Hans, thanks a million.
Kongress