After enabling gzip compression on my Apache server (mod_deflate), I consistently found that compressed responses reached the end user, on average, about 200 ms slower than uncompressed responses.
This was unexpected, so I modified the compression directive to ONLY compress text/html responses, fired up Wireshark, and looked at the network dump before and after compression.
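For context, the relevant part of the configuration looks roughly like this (a sketch of the mod_deflate directives rather than my exact config; the module path varies by distribution):

    # Load mod_deflate (path varies by distribution)
    LoadModule deflate_module modules/mod_deflate.so

    <IfModule mod_deflate.c>
        # Compress only text/html; everything else goes out uncompressed
        AddOutputFilterByType DEFLATE text/html
    </IfModule>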
Here are my observations for a single GET, with minimal other traffic on the network:
Before Compression
Transactions on the wire: 46
Total time for 46 transactions: 791 ms
  i. TCP seq/ack: 14 ms
  ii. 1st data segment: 693 ms
  iii. Remaining: 83 ms (27/28 data units transferred + TCP/IP handshakes)
After Compression
Transactions on the wire: 10
Total time for 10 transactions: 926 ms
  i. TCP seq/ack: 14 ms
  ii. 1st data segment: 746 ms
  iii. Remaining: 165 ms (5 out of 6 data units transferred)
With compression enabled, it is clear and understandable that the number of transactions on the wire is significantly lower than in the uncompressed case.
However, each compressed data unit took much longer to travel from source to destination.
It appears that the additional work of compression understandably takes time, but I cannot understand why each data unit sent was significantly slower when compressed.
My understanding of the compression process is:
1. GET request is received by Apache
2. Apache identifies the resource
3. Apache compresses the resource
4. Apache responds with the compressed response
With this scheme, I would assume that step 3 only delays the very first segment of the response (since we are compressing and then responding), while the remaining chunks should take on average the same time as the uncompressed chunks. But they do not.
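One way I thought of sanity-checking this is to time the compression step by itself, outside Apache. A rough sketch (Python; the payload below is a made-up stand-in for the real resource, and mod_deflate's actual compression level may differ):

    import gzip
    import time

    # Hypothetical stand-in for the real resource: ~40 KB of HTML-like text.
    payload = b"<html><body>" + b"<p>hello world</p>" * 2200 + b"</body></html>"

    start = time.perf_counter()
    # Level 6 is zlib's default; mod_deflate may be configured differently.
    compressed = gzip.compress(payload, compresslevel=6)
    elapsed_ms = (time.perf_counter() - start) * 1000

    print(f"original:   {len(payload)} bytes")
    print(f"compressed: {len(compressed)} bytes")
    print(f"compression took {elapsed_ms:.2f} ms")

If the compression itself only takes a few milliseconds, then step 3 cannot explain the extra delay I am seeing on the later segments.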
Can anyone tell me why, or suggest a better way to analyze this scenario? Also, does anyone have a before-and-after comparison? I would appreciate any feedback/comments/questions.
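For reference, this is the kind of client-side comparison I had in mind to complement the Wireshark dump (a rough sketch; the URL is a placeholder, and urllib does not decompress the gzip body, which is fine since I only care about transfer timing):

    import time
    import urllib.request

    URL = "http://example.com/page.html"  # placeholder for the real resource

    def fetch(accept_gzip):
        req = urllib.request.Request(URL)
        req.add_header("Accept-Encoding", "gzip" if accept_gzip else "identity")
        start = time.perf_counter()
        with urllib.request.urlopen(req) as resp:
            first_byte = resp.read(1)        # time from request to first body byte
            ttfb = time.perf_counter() - start
            body = first_byte + resp.read()  # drain the rest of the body
            total = time.perf_counter() - start
        return ttfb * 1000, total * 1000, len(body)

    for label, gz in (("uncompressed", False), ("gzip", True)):
        ttfb, total, size = fetch(gz)
        print(f"{label:12s} ttfb={ttfb:7.1f} ms  total={total:7.1f} ms  bytes={size}")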