views:

328

answers:

2

I have written a CSS server that does minification and basic parsing/variable replacement. The server runs on Node.js.

I want to gzip the responses from this server. As I was told on IRC, Node.js does not currently have a gzip library, so I am attempting to do it manually from the command line (I only gzip when the result is not already cached).

I write the file data out to a temp file and then use exec to call 'gzip -c -9 -q ' + tempFile. I get the compressed data back correctly (it seems) and send the proper Content-Encoding header ('gzip'), but Chrome reports:

Error 330 (net::ERR_CONTENT_DECODING_FAILED): Unknown error.

Some independent online gzip testers fail as well, so it is not just Chrome.

I assume I am missing something simple about generating gzip output for browsers, since I have never tried to do it manually before.

Any assistance would be helpful. The server is blazing fast, but I need to gzip the content to get the best performance for end users.

Thanks.

UPDATE: I have verified that my Content-Length is correct.

+1  A: 

Have you updated the Content-Length to match the gzipped size? It seems like a mismatch there might break the decoding.

Sionide21
The content is gzipped _before_ it's sent to the browser, so how would the browser not have the correct length? However, I will try that. I'm desperate. :)
Spot
His point is that if your server reported the pre-compression length in the header, you'd have an illegal HTTP response. You can easily examine the headers with Fiddler or a similar tool.
EricLaw -MSFT-
OK, that did not fix it, unfortunately.
Spot
Another bug was masking the result, but this did indeed turn out to be the problem. Long story I will not bore you with, but thank you for the assistance.
Spot
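In code terms, the fix this thread converges on looks something like the following hypothetical sketch (`sendGzipped` and its arguments are made up for illustration):

```javascript
// Content-Length must describe the body actually sent over the wire,
// i.e. the compressed byte count, not the original file's size.
function sendGzipped(res, gzippedBuf) {
  res.writeHead(200, {
    'Content-Type': 'text/css',
    'Content-Encoding': 'gzip',
    'Content-Length': gzippedBuf.length // compressed size
  });
  res.end(gzippedBuf);
}
```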
+2  A: 

Node is still bleeding edge and does not yet seem to handle binary data well.

Node's string encodings are ascii, binary, and utf8. [...] "binary" only look[s] at the first 8 bits of the 16-bit JavaScript string characters. The problem is that, according to ECMA, strings are sequences of 16-bit characters. If you use utf8 (the default), some normalization happens when reading into the string, and this corrupts gzipped data. If you use ascii, it obviously won't work.

It will work if you use the binary encoding for both reading and writing, because the upper 8 bits of a JavaScript string character simply go unused. If that is not possible, try sending the files directly to the client without loading them into JavaScript strings at all, perhaps with the help of a proxy server in front of Node.

I myself hope that Google's V8 engine gets a true binary string datatype, something like this proposal: http://groups.google.com/group/nodejs/browse_thread/thread/648a0f5ed2c95211/ef89acfe538931a1?lnk=gst&q=binary+type#ef89acfe538931a1

CommonJS is also proposing Binary/B, and since Node tries to follow CommonJS, there is some hope for the future.

Edit: I just discovered the net2 branch of Node, which contains a binary buffer (see src/node_buffer.h). It seems to be part of a complete overhaul of the networking code.

nalply
My problem actually turned out to be a Content-Length issue, but it was masked by another devious bug. However, you are right: V8 needs to get this stuff implemented, and quickly! :) Thanks for the response.
Spot
Node has merged the net2 branch. You can now handle binary data with Buffer in buffer.js.
nalply