views: 119

answers: 3
It is now common practice to combine stylesheets and scripts in an effort to reduce the number of HTTP requests. I have two questions:

  1. How expensive are HTTP requests, really?
  2. When is a request so big that it should be split?

I cannot find the answers to these two questions in any of the online reading I have done, such as the Yahoo! Best Practices guide, which states several times that HTTP requests are expensive but never explains why or by how much.

Thanks in advance.

A: 

I don't have a figure for exactly how expensive an HTTP request is, but it is always a good idea to reduce round trips between the client and the server. If you have a fixed amount of data to transmit, it will always be better to send it in fewer requests.
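
For a rough feel of why this holds, here is a minimal sketch (Python, with made-up latency numbers) of the serial case: each extra request pays its own round trips even though the total payload is the same.

    # Crude serial model with assumed numbers: each request costs one RTT for
    # the TCP handshake plus one RTT for the request/response exchange,
    # regardless of how the payload is split (transfer time ignored).
    RTT_MS = 100                      # assumed round-trip time
    PER_REQUEST_MS = 2 * RTT_MS

    for n_requests in (1, 5, 10):
        overhead = n_requests * PER_REQUEST_MS
        print(f"{n_requests:2d} request(s): ~{overhead} ms of latency overhead")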

Ned Batchelder
+2  A: 

An HTTP request requires a TCP/IP connection to be established (think three-way handshake) before the HTTP request itself can be handled.

This involves at least the delay of sending the SYN to the server and getting the SYN/ACK back (the client then sends the ACK to open the connection).

So, if the delay between the client and server is a uniform 50 ms each way, that adds up to a 100 ms delay before the client can even send the HTTP request. It is then another 100 ms before it starts receiving the actual response (the request travels to the server, then the server's reply travels back).
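
To put numbers on that for your own network, here is a quick sketch in Python (example.com is just a placeholder host) that times the TCP connect separately from the time to the first byte of the response:

    # Rough measurement sketch: time the TCP handshake separately from the
    # time to first byte of an HTTP response. The host and the resulting
    # numbers are placeholders; real results depend entirely on your network.
    import socket
    import time

    host, port = "example.com", 80    # hypothetical target

    start = time.perf_counter()
    sock = socket.create_connection((host, port))   # 3-way handshake happens here
    connected = time.perf_counter()

    request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode("ascii"))

    first_byte = None
    while True:
        data = sock.recv(4096)
        if not data:
            break
        if first_byte is None:
            first_byte = time.perf_counter()
    sock.close()

    print(f"TCP connect (handshake): {(connected - start) * 1000:.1f} ms")
    print(f"Time to first byte:      {(first_byte - start) * 1000:.1f} ms")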

You also need to take into account that a standard web browser limits the number of HTTP requests it will process concurrently. If your requests have to wait, you don't get that handshake time for free (so to speak), since you also need to wait for another connection to finish. Servers play a role as well, depending on how they serve the requests.
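
Putting the handshake cost and the connection limit together, a crude back-of-envelope model (made-up numbers, no keep-alive, bandwidth ignored) shows how the delays stack up:

    # Back-of-envelope model with assumed numbers: requests queue behind the
    # browser's connection limit, and each new connection pays the handshake
    # again (no keep-alive assumed, transfer time ignored).
    import math

    RTT_MS = 100       # assumed round-trip time
    REQUESTS = 12      # e.g. a dozen separate CSS/JS files
    PARALLEL = 2       # assumed per-host connection limit

    waves = math.ceil(REQUESTS / PARALLEL)   # requests are served in batches
    many_small = waves * 2 * RTT_MS          # handshake + request/response per wave
    one_big = 2 * RTT_MS                     # same bytes in one combined file

    print(f"{REQUESTS} separate requests: ~{many_small} ms of latency")
    print(f"1 combined request:   ~{one_big} ms of latency")
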

Dan McGrath
Adding to this answer, browsers usually open only a limited number of TCP connections at once (around 2 to 4), so if you have more requests than that, they will be queued.
Andrew Y
I thought to add that as soon as I hit post. Pretty important detail, really.
Dan McGrath
+1  A: 
  1. Whenever a request is made, it is subject to the harsh realities of network reliability. Two requests made in rapid succession from the same location might take entirely different routes, so each additional request adds an element of unpredictability to performance. A single consolidated request helps to mitigate that risk. @Dan McG made a sound point about the TCP handshake overhead.
  2. HTTP itself does not care about request size, since it is an application-layer protocol sitting on top of the TCP/IP stack; splitting and transmitting the data is TCP/IP's concern. What matters to the publisher is keeping document/file sizes as small as possible, small enough that the application stays responsive.

Hope that makes sense.

karim79