views: 195 · answers: 4

It so happens that my actual data is only 1/4 the size of the HTTP request headers, in bytes.
Is there a way to cut down the size of the HTTP headers, or any other relevant way to deal with this situation?
I am sending data from a mobile device over GPRS to the server and don't want to be burdened with huge request packets that will eat into my $$ and bandwidth.

+4  A: 

Well, what's taking up the bulk of your headers? For example, Stack Overflow recently moved most of the static content to another domain so that the SO cookies wouldn't be included in requests for the static content (which wouldn't use the cookies anyway).

If, however, most of the headers are just things the browser will always send (user agent etc) then there's not a lot you can do.
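To see where the bytes actually go, you can assemble the raw request text yourself and measure it. This sketch is illustrative only; the header names and values (cookie contents, user-agent string, the `/data?id=42` path) are made-up examples, not taken from the question:

```python
# Compare the on-the-wire size of a browser-style GET request with a
# trimmed-down one. All header values below are hypothetical examples.
def request_size(path, headers):
    """Byte length of a raw HTTP/1.1 GET request (CRLF line endings)."""
    lines = ["GET %s HTTP/1.1" % path]
    lines += ["%s: %s" % (name, value) for name, value in headers]
    return len(("\r\n".join(lines) + "\r\n\r\n").encode("ascii"))

browser_headers = [
    ("Host", "example.com"),
    ("User-Agent", "Mozilla/5.0 (Windows NT 6.1; rv:10.0) Gecko/20100101"),
    ("Accept", "text/html,application/xml;q=0.9,*/*;q=0.8"),
    ("Accept-Language", "en-us,en;q=0.5"),
    ("Accept-Encoding", "gzip, deflate"),
    ("Cookie", "session=abcdef0123456789; prefs=theme-dark-lang-en"),
]
minimal_headers = [("Host", "example.com")]  # Host is mandatory in HTTP/1.1

full = request_size("/data?id=42", browser_headers)
slim = request_size("/data?id=42", minimal_headers)
print(full, slim)  # the trimmed request is several times smaller
```

Printing each header's individual length the same way shows which ones (typically cookies and the user-agent) dominate, and therefore which are worth attacking first.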

Jon Skeet
I had asked a question here (http://stackoverflow.com/questions/1378476/http-get-request-packet-size-in-bytes/1378496#1378496) regarding HTTP request header size, and that shocked me! My data would certainly be much lower than the figures mentioned in that post.
Kevin Boyd
@Kevin, there is nothing shocking there. Most of the request headers in that tcpdump extract are required by the server to decide how the response is to be prepared.
Vineet Reynolds
@Vineet, is there any way out of this? Or is the header baggage inevitable?
Kevin Boyd
@Kevin, it is inevitable. I haven't taken a look at your app, but from the previous question you asked on SO, you should consider using Fiddler to see what is wrong. I won't be surprised if there are too many GET requests, in which case the solution is not to reduce header size but to reduce the number of such GETs and to remove any information that does not need to be sent.
Vineet Reynolds
@Kevin, the neXpert performance optimization tool can be installed alongside Fiddler to give you a quick report of what is wrong. I call it the smoke test for performance. You'll find more details on the neXpert blog at http://blogs.msdn.com/nexpert/
Vineet Reynolds
+3  A: 

I've never had to optimize site performance by trimming headers. In my experience, most performance issues had to do with:

  1. A large number of unwanted GET requests. This was often due to the server not sending appropriate expiry and caching headers back to the client; sometimes it was simply a poorly written application.
  2. A large number of TCP connections being opened. Performance improves when you can keep the connection alive and reuse it to serve multiple requests. I'm unsure whether mobile clients support keep-alive.
  3. Use of compression, or the lack of it. If there is anything that can cut down on expenses, it is compression. However, I'm not so sure mobile clients are able to support compression. By the way, compression is normally applied to responses, not requests (every browser I know of never compresses the request, although the HTTP spec allows for it).

If you still need better performance after #3, your application needs some form of performance design review.
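As a rough illustration of point 3 above (not from the answer itself), gzip typically shrinks repetitive text payloads dramatically. The sample payload here is invented for the demonstration:

```python
import gzip

# Hypothetical repetitive text payload, typical of HTML/JSON responses.
payload = b'{"sensor": "temp", "value": 21.5, "unit": "C"}\n' * 50
compressed = gzip.compress(payload)
print(len(payload), len(compressed))  # compressed is far smaller
```

A client opts in by sending `Accept-Encoding: gzip`; whether the server honors it, and whether the mobile stack can decompress, is exactly the uncertainty the answer flags.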

Vineet Reynolds
+2  A: 

I consider the headers to be "architecture", that is: their exact content varies from application to application according to the requirements.

Once you have the exact current list (using the links provided in this post), you can see which headers you need and avoid sending the others.

Who knows whether it makes a significant difference, but at least you can rest assured that you have done your best on that front.

KLE
+2  A: 

Well, this may prove unpopular and/or not actually answer your question, but have you given any thought to your data granularity?

Once you've reduced your HTTP headers as much as you can, I suspect you'll still want to reduce the header/data ratio further. The obvious way to do that is to send/receive more than one item of data in each HTTP request.

An added layer of logic on either the client or the server side (or a change to your data model) would allow you to request data in bigger chunks, based on measuring what other data you are likely to need when you request a single item.

The whole point would be to transfer more data in each request in order to reduce the number of requests. The waste of bandwidth (and client storage) from transferring data you will not actually need could end up being more acceptable than the HTTP header footprint.
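A back-of-the-envelope sketch of that trade-off (the 400-byte header and 100-byte item figures are assumptions for illustration, not numbers from the post):

```python
# Hypothetical sizes: per-request header overhead vs. per-item payload.
HEADER_BYTES = 400   # assumed fixed header cost per HTTP request
ITEM_BYTES = 100     # assumed size of one data item
N_ITEMS = 20

# One request per item: pay the header cost N times.
one_per_request = N_ITEMS * (HEADER_BYTES + ITEM_BYTES)
# One batched request: pay the header cost once.
batched = HEADER_BYTES + N_ITEMS * ITEM_BYTES

print(one_per_request, batched)  # 10000 vs 2400
```

Even if some batched items go unused, the batch has to waste more than the headers it saved before it stops paying off, which is exactly the acceptability judgment described above.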

QuickRecipesOnSymbianOS
I had not considered data granularity; thanks for bringing it to my attention. A point well made!
Kevin Boyd