It so happens that my actual data is about 1/4 the size of the HTTP request headers in bytes.
Is there a way to cut down on the size of the HTTP headers, or any other relevant way to deal with this situation?
I am sending data from a mobile device over GPRS to the server and don't want to be burdened with huge request packets that will eat into my $$ and also my bandwidth.
Well, what's taking up the bulk of your headers? For example, Stack Overflow recently moved most of the static content to another domain so that the SO cookies wouldn't be included in requests for the static content (which wouldn't use the cookies anyway).
If, however, most of the headers are just things the browser will always send (user agent etc) then there's not a lot you can do.
I've never had to optimize site performance by chopping off headers. That said, most of the performance issues I have seen had to do with:
- A large number of unwanted GET requests. This is often due to the server not sending the appropriate expiry and caching headers back to the client; sometimes it is simply a poorly written application.
- A large number of TCP connections being opened. Performance improves when you are able to keep the connection alive and reuse it to serve multiple requests. I'm unsure whether mobile clients support keep-alive.
- Usage of compression, or the lack of it. If anything can cut down on expenses, it is compression. However, I'm not so sure mobile clients are able to support it. By the way, compression is normally applied to responses, not to requests (no browser I know of ever compresses the request, although the HTTP spec allows for it).
If you still need better performance after addressing these three points, your application needs some form of performance design review.
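The compression point is easy to sanity-check offline. A minimal sketch, assuming your payload is repetitive JSON telemetry (a hypothetical payload; your server would also need to be configured to accept `Content-Encoding: gzip` on requests, which the HTTP spec permits but most servers don't enable by default):

```python
import gzip
import json

# Hypothetical, repetitive telemetry payload -- this kind of data compresses well.
payload = json.dumps([{"sensor": "temp", "value": 21.5}] * 50).encode("utf-8")
compressed = gzip.compress(payload)

# If the server accepts it, the request would carry "Content-Encoding: gzip"
# and the compressed bytes as the body instead of the raw JSON.
print(f"raw: {len(payload)} bytes, gzipped: {len(compressed)} bytes")
```

Measuring on your real payloads before committing is worthwhile: very small or already-compact bodies can even grow slightly under gzip.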
I consider the headers to be "architecture", that is, their exact content varies from application to application according to the requirements.
Once you have the exact current list, using the links provided in this post,
you can see which ones you need and avoid sending the others.
Who knows whether it makes a significant difference, but at least you can rest assured that you did your best on that front.
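To see whether trimming is worth it, you can compute the on-the-wire byte cost of a header set directly. A sketch with hypothetical header values (the "verbose" set stands in for whatever your client currently sends; the per-line cost follows HTTP/1.1's `Name: value\r\n` framing plus the blank line that ends the header block):

```python
def header_bytes(headers):
    # Each header is serialized as "Name: value\r\n"; the final "\r\n"
    # terminating the header block adds 2 more bytes.
    return sum(len(f"{k}: {v}\r\n".encode("utf-8")) for k, v in headers.items()) + 2

# Hypothetical header sets for illustration.
verbose = {
    "User-Agent": "MyMobileClient/1.0 (GPRS; SomePhone)",
    "Accept": "*/*",
    "Accept-Language": "en-US,en;q=0.9",
    "Cookie": "session=abc123; tracking=xyz789",
    "Referer": "http://example.com/previous/page",
}
trimmed = {
    "User-Agent": "m/1",  # many servers accept a minimal token here
    "Accept": "*/*",
}

print(f"verbose: {header_bytes(verbose)} bytes, trimmed: {header_bytes(trimmed)} bytes")
```

Run this against the headers your client actually emits to decide whether the savings justify the change.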
Well, this may prove unpopular and/or not actually answer your question, but have you given any thought to your data granularity?
Once you've reduced your HTTP headers as much as you can, I suspect you'll still want to reduce the header/data ratio some more. The obvious way to do that is to send/receive more than one item of data in each HTTP request.
An added layer of logic on either the client or the server side (or a change to your data model) would allow you to request data in bigger chunks, based on measuring what other data you are likely to need when you request a single item.
The whole point would be to transfer more data in each request in order to reduce the number of requests. The wasted bandwidth (and client storage) - coming from transferring data you will not actually need - could end up being more acceptable than the HTTP header footprint.
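The arithmetic behind batching can be sketched as follows. The per-request header overhead and the item shape are both assumptions here (plug in your measured numbers), but the shape of the result is the point: fixed overhead is paid once per request, so fewer, larger requests win:

```python
import json

HEADER_OVERHEAD = 400  # assumed bytes of headers per request (illustrative only)

# Hypothetical data items to upload.
items = [{"id": i, "value": i * 1.5} for i in range(20)]

# Strategy A: one request per item -- pay the header overhead every time.
per_item_total = sum(
    HEADER_OVERHEAD + len(json.dumps(item).encode("utf-8")) for item in items
)

# Strategy B: one batched request -- pay the header overhead once.
batched_total = HEADER_OVERHEAD + len(json.dumps(items).encode("utf-8"))

print(f"per-item: {per_item_total} bytes, batched: {batched_total} bytes")
```

The same comparison, with your real header size and payloads, tells you how big the batches need to be before the savings outweigh the cost of occasionally transferring data you don't use.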