At the Velocity 2010 conference, Google said that header compression can yield big gains:

Hölzle noted a glaring inefficiency in the handling of web page headers, which provide information about a user’s IP address, browser and other session data. The average web page makes 44 calls to different resources, with many of those requests including repetitive header data. Holzle said compressing the headers produces an 88 percent page load improvement for some leading sites.

How does one ensure that the response headers sent by the web server are compressed? Is this even possible with today's technology?
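
For illustration, here is a minimal Python sketch (using the standard library's http.client against the placeholder host example.com) showing the status quo: the body can be gzip-compressed via Content-Encoding, but the headers themselves always travel as uncompressed text.

    import gzip
    import http.client

    # Ask for a gzip-compressed body; example.com is just a placeholder host.
    conn = http.client.HTTPConnection("example.com")
    conn.request("GET", "/", headers={"Accept-Encoding": "gzip"})
    resp = conn.getresponse()

    # The response headers arrive as plain, uncompressed text on the wire...
    for name, value in resp.getheaders():
        print(f"{name}: {value}")

    # ...even if the body itself came back compressed.
    body = resp.read()
    if resp.getheader("Content-Encoding") == "gzip":
        body = gzip.decompress(body)
    conn.close()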

+5  A: 

Compressing HTTP request or response headers is not HTTP/1.1 standards-compliant.

That being said, here is some analysis of how such a scheme could work:

1) Maybe they mean you could accomplish this with some other custom HTTP-like scheme, say httpc://.

I could just as well claim that sending requests and responses to/from the same server in batches of 5 speeds up the web. I call this scheme httpBrian://.

2) If you assume they mean only HTTP response headers, the request could carry an extra header specifying that you want the response as a non-compliant HTTP response. I imagine this would cause problems with proxies and the like, though.

3) If you assume they mean only PARTIAL HTTP response headers, the server could take the non-proxy headers that matter only to the HTTP client performing the request, compress them, and pack them into another header, with the HTTP request opting in to such a feature. This is most likely what they are trying to accomplish; a rough sketch of what that could look like follows below.
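
Here is a minimal sketch of idea 3. The header names X-Accept-Compressed-Headers and X-Compressed-Headers are inventions for illustration, not part of any standard: the server gzips the non-critical headers and base64-encodes them into a single header, and a client that opted in expands them again.

    import base64
    import gzip

    # Hypothetical header names, for illustration only; no standard defines these.
    OPT_IN_HEADER = "X-Accept-Compressed-Headers"
    BUNDLE_HEADER = "X-Compressed-Headers"

    def pack_headers(headers):
        """Server side: gzip the non-critical headers into one header value."""
        raw = "\r\n".join(f"{k}: {v}" for k, v in headers.items()).encode("ascii")
        return base64.b64encode(gzip.compress(raw)).decode("ascii")

    def unpack_headers(value):
        """Client side: the inverse, run by a client that sent the opt-in header."""
        raw = gzip.decompress(base64.b64decode(value)).decode("ascii")
        return dict(line.split(": ", 1) for line in raw.split("\r\n"))

    # Bundle the headers that only the requesting client cares about.
    bundled = pack_headers({"Cache-Control": "private", "X-Frame-Options": "DENY"})
    print(unpack_headers(bundled))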

Brian R. Bondy
I hereby claim `httpMatt://` for batches of > 5. I believe I read a blog or article on [header compression] saying that Google is modifying its Chrome browser and server software to send compressed headers for testing purposes. Thus far it's merely been a case study.
Matt S
@Matt: Damn, if only I had thought of batches > 5 first; httpMatt:// will forever go down in history instead of httpBrian:// :(
Brian R. Bondy
A: 

Read the paragraph more thoroughly! Hölzle talks about web page headers, not HTTP headers. So we are talking about something like meta tags and so on.

OK, it seems that even though I've been (correctly) downvoted a lot, I am the first to find the correct source. It's about a new application-layer protocol from Google named SPDY (SPeeDY, get it?), which offers HTTP header compression.
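
For the curious, here is a rough Python approximation of the SPDY approach (simplified: real SPDY length-prefixes each name/value pair and seeds the compressor with a shared dictionary of common header names, both omitted here). The key trick is a persistent zlib stream per connection, so headers repeated across requests compress to almost nothing:

    import zlib

    # One persistent zlib stream for the whole connection, as in SPDY.
    compressor = zlib.compressobj()

    def compress_header_block(headers):
        # Simplified serialization; real SPDY length-prefixes each name/value.
        block = "".join(f"{k}\0{v}\0" for k, v in headers.items()).encode("utf-8")
        return compressor.compress(block) + compressor.flush(zlib.Z_SYNC_FLUSH)

    first = compress_header_block({"user-agent": "Mozilla/5.0", "accept": "*/*"})
    second = compress_header_block({"user-agent": "Mozilla/5.0", "accept": "*/*"})
    print(len(first), len(second))  # the repeated second block is far smaller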

ablaeul
"The average web page makes 44 calls to different resources, with many of those requests including repetitive header data. Holzle said compressing the headers produces an 88 percent page load improvement for some leading sites."
Brian R. Bondy
The meta tags contain user-agent and session information? (There's no HTTP or meta header carrying the user's IP address at all, AFAIK, even for proxied requests, so that's just creative journalism.)
Rup
What? Things like IP address, session data and user agent are part of the HTTP headers, not the HTML headers. This stuff certainly is repeated a great deal across multiple requests... it's a valid concern, especially for a company like Google that probably outputs more GB of HTTP headers in a day than many big web sites output in content!
Warren
Well, it means HTTP headers... you can't get "user's IP address, browser and other session data" from meta tags. Meta tags are already compressed when using normal HTTP compression tools.
Ed B
Hölzle is clearly talking about HTTP headers. I take your "Read the paragraph more thoroughly" and raise you a "*Understand* the paragraph more thoroughly"
Rob Levine
@ablaeul: In addition to what I said above, your reading doesn't make sense as stated, because you can already have Content-Encoding compression, and why would you want partial Content-Encoding compression for HTML pages?
Brian R. Bondy
@ablaeul: Retracted downvote, nice find.
Brian R. Bondy
A: 

If the infrastructure supported header compression through some type of custom transport protocol, the headers would stay compressed all the way until they were handed off to an entity that didn't support that feature.

In the end, perhaps even our browsers would support it. So I think they're taking a proactive approach by starting it on the server end and seeing how far it goes.
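
A sketch of what that hand-off could look like, reusing the hypothetical X-Compressed-Headers bundle from the first answer (supports_compressed_headers is likewise an invented stand-in for a real capability check, not a real API):

    import base64
    import gzip

    BUNDLE_HEADER = "X-Compressed-Headers"  # hypothetical, as in the first answer

    def unpack_headers(value):
        raw = gzip.decompress(base64.b64decode(value)).decode("ascii")
        return dict(line.split(": ", 1) for line in raw.split("\r\n"))

    def supports_compressed_headers(hop):
        # Stand-in capability check; a real proxy would learn this by negotiation.
        return hop in {"upstream.example"}

    def forward(headers, next_hop):
        """Keep headers compressed until the next hop can't handle them."""
        if BUNDLE_HEADER in headers and not supports_compressed_headers(next_hop):
            # Legacy hop: inflate the bundle back into ordinary headers.
            headers.update(unpack_headers(headers.pop(BUNDLE_HEADER)))
        return headers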

Marcus Adams