Hello,

I'm trying to use the Deflate/GZip streams in C#, but it appears that the files are bigger after compression than before.

For example, I compress a 900 KB docx file, but it produces a 1.4 MB one!

And it happens with every file I've tried.

Maybe I'm doing it wrong? Here is my code:

  // requires System.IO and System.IO.Compression
  FileStream input = File.OpenRead(Environment.CurrentDirectory + "/file.docx");
  FileStream output = File.OpenWrite(Environment.CurrentDirectory + "/compressedfile.dat");

  GZipStream comp = new GZipStream(output, CompressionMode.Compress);

  // copy the input into the compressing stream, one byte at a time
  while (input.Position != input.Length)
      comp.WriteByte((byte)input.ReadByte());

  input.Close();

  comp.Close(); // automatically flushes on close
  output.Close();
+4  A: 

Such a big difference seems strange to me, but keep in mind that a docx file is already ZIP-compressed internally, so there is no point in compressing it again; the result is usually bigger.
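
You can check this for yourself with a minimal sketch along these lines (the file names are placeholders): it gzips a file in buffered chunks and prints both sizes. On a .docx the output typically comes out larger; on plain text it should shrink considerably.

  using System;
  using System.IO;
  using System.IO.Compression;

  class GZipSizeCheck
  {
    static void Main()
    {
      // Placeholder file names; point "source" at any file you want to test.
      string source = "file.docx";
      string target = "file.docx.gz";

      using (FileStream input = File.OpenRead(source))
      using (FileStream output = File.Create(target))
      using (GZipStream gzip = new GZipStream(output, CompressionMode.Compress))
      {
        // Copy in buffered chunks instead of one byte at a time.
        byte[] buffer = new byte[4096];
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
          gzip.Write(buffer, 0, read);
      }

      Console.WriteLine("before: {0} bytes, after: {1} bytes",
        new FileInfo(source).Length, new FileInfo(target).Length);
    }
  }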

Andrey
Confirmed: http://www.myformatfactory.com/DOCX
Dave Swersky
Yes, thanks, I didn't know that, and that's why it didn't work :) I tried with .txt and other formats and it looks better. It still doesn't work on a home-made serialized file type ... but that doesn't matter in the end; I just wanted to see how to use these compression streams :)
kite
A: 

I don't think GZipStream and DeflateStream are intended for compressing files. You would probably have better luck with a file compression library like SharpZipLib.

Dave Swersky
They are made to compress and decompress. I'm currently reading the MCTS 70-536 certification book, and they are used that way there ^^
kite
And what are they for, then? http://msdn.microsoft.com/en-us/library/system.io.compression.gzipstream.aspx says: "GZipStream Class: Provides methods and properties used to compress and decompress streams."
Andrey
They're perfectly good at compressing files, and in many cases handier than zip, since they work directly on a single stream rather than creating an archive, and you can serve the compressed file straight from a webserver instead of compressing on the fly every time. Appending .gz to the name (after the original extension rather than replacing it) is the usual convention for gzip files. Not to say that SharpZipLib isn't better in a lot of cases, though.
Jon Hanna
@kite: I worked at Microsoft PSS and helped develop some of the testing. If it's done in an MS certification book, it's equally likely to be a HORRIBLE way of doing things :) Having said that, there is no compressor that can make an already-compressed file smaller.
Dave Swersky
@Dave Swersky: That's a rather bold statement. One could use Huffman coding to compress a file, and then zip it to make it even smaller. Depending on how bad the first compression technique is, a second compression technique could make the result better or worse.
Excel20
@Excel: I stand corrected. I suppose combining two different types of compression could increase the ratio overall, but I will say using ZIP twice will not work.
Dave Swersky
+3  A: 

Firstly, .NET's deflate/gzip streams compress noticeably worse than dedicated tools such as zip, 7z, etc.

Secondly, docx (like all of the MS Office formats ending in 'x') is just a .zip file anyway. Rename a .docx to .zip to reveal the smoke and mirrors.

So when you run deflate/gzip over a docx, it will actually make the file bigger. (It's like running zip at a low compression level over a file that was already zipped at a high compression level.)

However, if you run deflate/gzip over HTML, a text file, or anything else that is not already compressed, it will do a pretty good job.
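
If you want to round-trip a text file to confirm it, a small decompression sketch like this one (file names are placeholders) reads a .gz file produced by GZipStream, such as the one from the question's code, and writes the original contents back out:

  using System.IO;
  using System.IO.Compression;

  static class GZipRoundTrip
  {
    // Expand a .gz file back to its original contents.
    public static void Decompress(string gzPath, string outPath)
    {
      using (FileStream input = File.OpenRead(gzPath))
      using (GZipStream gzip = new GZipStream(input, CompressionMode.Decompress))
      using (FileStream output = File.Create(outPath))
      {
        byte[] buffer = new byte[4096];
        int read;
        while ((read = gzip.Read(buffer, 0, buffer.Length)) > 0)
          output.Write(buffer, 0, read);
      }
    }
  }

Reading from a GZipStream opened in CompressionMode.Decompress inflates the data transparently as you go.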

DJA
Yep, thanks. As I said in the other comment, I didn't know that docx was already compressed. And sure, 7z and other libraries are better, but I just wanted to try these out to see what they can do.
kite
+1  A: 

Although it is true, as others have indicated, that the example file you chose is already compressed, the bigger issue is that, unlike most compression utilities, the DeflateStream and GZipStream classes simply try to tokenize/compress a data stream without recognizing when the extra tokens (overhead) actually increase the amount of data required. Zip, 7z, etc. are smart enough to know that if data is largely random entropy (virtually incompressible), they should simply store it "as-is" (stored, not compressed) rather than attempt to compress it further.
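
A rough sketch of how that "store" fallback can be bolted on by hand (the helper name and the one-byte mode flag are made up for illustration, not any library API): compress into memory first, and keep the original bytes if deflate did not actually help.

  using System.IO;
  using System.IO.Compression;

  static class StoreFallbackSketch
  {
    // Hypothetical helper, not a library API. The first output byte marks the mode:
    // 0 = stored as-is (compression did not help), 1 = deflate-compressed.
    public static byte[] CompressOrStore(byte[] data)
    {
      byte[] deflated;
      using (MemoryStream ms = new MemoryStream())
      {
        using (DeflateStream ds = new DeflateStream(ms, CompressionMode.Compress))
          ds.Write(data, 0, data.Length);
        deflated = ms.ToArray(); // ToArray still works after the stream is closed
      }

      bool worthIt = deflated.Length < data.Length;
      byte[] payload = worthIt ? deflated : data;
      byte[] result = new byte[payload.Length + 1];
      result[0] = (byte)(worthIt ? 1 : 0);
      payload.CopyTo(result, 1);
      return result;
    }
  }

A real container format (zip, 7z) records that per-entry choice in its headers; here the single leading byte plays that role.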

Michael