If a web server can send a gzipped response, why can't the browser send a gzipped request?
Why should it? Requests are usually so small that compression would
- probably enlarge them, and
- take more time than it takes to transfer the uncompressed request.
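To put rough numbers on the first point: gzip wraps its output in a fixed container (about a 10-byte header plus an 8-byte trailer), so for a typical small request the "compressed" form can come out larger than the original. A quick sketch in Java (the request text is just a made-up example):

    import java.io.ByteArrayOutputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.GZIPOutputStream;

    public class TinyRequestGzip {
        public static void main(String[] args) throws Exception {
            // A minimal made-up request: under 50 bytes on the wire.
            byte[] request = "GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
                    .getBytes(StandardCharsets.US_ASCII);

            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
                gz.write(request);
            } // closing flushes the deflate data and writes the gzip trailer

            // Expect the gzipped size to be at or above the plain size here,
            // because the container overhead dominates tiny payloads.
            System.out.println("plain:   " + request.length + " bytes");
            System.out.println("gzipped: " + buf.size() + " bytes");
        }
    }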
A client can't know in advance that a server would understand a gzipped request, but the server can know that the client will accept one.
Because it doesn't know that the server can accept it. An HTTP transaction has a single request sent by the client, followed by a response. One of the things the client sends is which encodings/compression it supports (the Accept-Encoding header). The server can then decide how to compress the response. The client does not have this luxury.
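For illustration, the usual negotiation looks like this on the wire (abbreviated; the host and length are placeholders):

    GET /index.html HTTP/1.1
    Host: www.example.com
    Accept-Encoding: gzip, deflate

    HTTP/1.1 200 OK
    Content-Type: text/html
    Content-Encoding: gzip
    Content-Length: 1234

    (gzipped body follows)

The request headers travel uncompressed either way; only after seeing Accept-Encoding can the server choose to gzip the body it sends back.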
The client and server have to agree on how to communicate; part of this is whether the communication can be compressed. HTTP was designed as a request/response model, and the original design almost certainly envisioned small requests and potentially large responses. Compression is not required to implement HTTP; there are both servers and clients that don't support it.
HTTP compression is implemented by the client saying it can support compression, and if the server sees this in the request and supports compression, it can compress the response. To compress the request, the client would either have to send a "pre-request" that negotiated that the request would be compressed, or compression would have to be a required encoding for ALL requests. For example, probing the server with OPTIONS:
    OPTIONS * HTTP/1.1
    Host: www.w3c.org

    HTTP/1.1 200 OK
    Date: Fri, 10 Apr 2009 07:23:05 GMT
    Server: Apache/2
    Vary: Host
    Allow: GET,HEAD,POST,OPTIONS,TRACE
    Content-Length: 0
    Connection: close
    Content-Type: application/x-httpd-cgi
Is this true? "If the server's response to an OPTIONS request contains Accept-Encoding: gzip, the client may safely upload its request in gzip format."
It could, provided it could guarantee that the server would accept it. This might mean using an OPTIONS request.
There are a lot of things that web browsers could do (for example, pipelining) that they don't do. Web browser developers consider the compatibility implications of a change.
In a heterogeneous environment, there are a lot of different web servers and configurations. Making a change to the way a client works could break some of them.
Perhaps only 1% of servers might accept gzipped requests, and perhaps some of those advertise that they do but cannot actually accept them correctly - so users would be unable to upload files to those sites.
Historically there have been a lot of broken client/server implementations - for a long time, gzipped responses were broken in major web browsers (thankfully those are now mostly gone).
So you'd end up with blacklists of user-agents or servers (or domain names) where those options were automatically turned off, which is nasty.
If you're writing a web application, I'm assuming that you're in control of what is sent to the client and what is sent back from the client.
It would be easy enough to write a gzip implementation in JavaScript that compresses the POST data being sent to the server. The server could have a filter (a J2EE term) that knows the client's data is sent compressed; this filter decompresses the data and then passes it to the servlet (or action classes in Struts), which reads the data as normal, e.g. request.getParameter(...).
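As a rough sketch of the server half, here is what such a filter might look like against the pre-3.1 javax.servlet API. The Content-Encoding header check and the class names are my own choices for illustration, not a standard; and depending on the container, form parameters may need to be read from the decompressed stream directly rather than through getParameter(...).

    import java.io.IOException;
    import java.util.zip.GZIPInputStream;
    import javax.servlet.*;
    import javax.servlet.http.*;

    // Decompresses gzipped request bodies before the servlet sees them.
    public class GzipRequestFilter implements Filter {

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest httpReq = (HttpServletRequest) req;
            // Convention assumed here: the client marks a compressed body
            // with Content-Encoding: gzip.
            if ("gzip".equalsIgnoreCase(httpReq.getHeader("Content-Encoding"))) {
                req = new GzipRequestWrapper(httpReq);
            }
            chain.doFilter(req, res);
        }

        public void init(FilterConfig config) {}
        public void destroy() {}

        // Presents a decompressing view of the original request body.
        private static class GzipRequestWrapper extends HttpServletRequestWrapper {
            GzipRequestWrapper(HttpServletRequest request) {
                super(request);
            }

            public ServletInputStream getInputStream() throws IOException {
                final GZIPInputStream gzip = new GZIPInputStream(super.getInputStream());
                return new ServletInputStream() {
                    public int read() throws IOException {
                        return gzip.read();
                    }
                };
            }
        }
    }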
This seems perfectly logical and doable if you're in control. As other posts mention, you couldn't rely on the browser to do this automatically, but since you're writing the web pages, you can get the browser to do the compression you're after (with a little work).
Andy.