tags:
views: 571
answers: 3

Hi,

Does every web request send the browser's cookies?

I'm not talking about page views, but requests for an image, a .js file, etc.

Update: If a web page has 50 elements, that is 50 requests. Why would it send the SAME cookie(s) for each request? Doesn't it cache or know it already has them?

+10  A: 

Yes. As long as the requested URL is within the path defined in the cookie (and all of the other restrictions -- secure, httponly, not expired, etc. -- hold), the cookie will be sent for every request.
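To make those checks concrete, here is a rough sketch of the decision a browser makes before attaching a cookie to a request. The function and field names are illustrative, and this is a simplification, not a full RFC 6265 implementation:

```python
from datetime import datetime, timezone
from urllib.parse import urlparse

def should_send(cookie: dict, url: str) -> bool:
    """Simplified sketch of the per-request cookie checks."""
    parts = urlparse(url)
    if cookie.get("secure") and parts.scheme != "https":
        return False  # Secure cookies go only over HTTPS
    if not (parts.hostname or "").endswith(cookie["domain"]):
        return False  # host must match the cookie's domain
    if not parts.path.startswith(cookie["path"]):
        return False  # request path must fall under the cookie's path
    expires = cookie.get("expires")
    if expires and expires < datetime.now(timezone.utc):
        return False  # expired cookies are not sent
    return True

c = {"domain": "www.stackoverflow.com", "path": "/", "secure": False}
print(should_send(c, "http://www.stackoverflow.com/images/logo.png"))  # True
```

If all the checks pass, the cookie is attached -- and this runs for every single request, image or not.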

Ian Clelland
This, incidentally, is why page speed tools like Google Page Speed or Yahoo's YSlow recommend serving static content from a separate, cookie-free domain.
ceejayoz
+1  A: 

Yes. Every request sends the cookies that belong to the same domain.

Like: you have 4 cookies at www.stackoverflow.com. If you make a request to www.stackoverflow.com/images/logo.png it will send the cookies too.
But if you request stackoverflow.com/images/logo.png or images.stackoverflow.com/logo.png, no cookies will be sent, since none were set for those hosts.
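The three URLs above can be checked with a tiny host-matching sketch (simplified: a cookie set without an explicit Domain attribute is sent back only to the exact host that set it):

```python
from urllib.parse import urlparse

def host_matches(cookie_host: str, url: str) -> bool:
    # Host-only match: the request host must equal the cookie's host.
    return urlparse(url).hostname == cookie_host

host = "www.stackoverflow.com"
print(host_matches(host, "http://www.stackoverflow.com/images/logo.png"))  # True
print(host_matches(host, "http://stackoverflow.com/images/logo.png"))      # False
print(host_matches(host, "http://images.stackoverflow.com/logo.png"))      # False
```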

You can read more about cookies and images requesting, for example, at this StackOverflow Blog Post.

Igoru
+7  A: 

As others have said, if the cookie's host, path, etc. restrictions are met, it'll be sent, 50 times.

But you also asked why: because cookies are an HTTP feature, and HTTP is stateless. HTTP is designed to work without the server storing any state between requests.

In fact, the server doesn't have a solid way of recognizing which user is sending a given request; there could be a thousand users behind a single web proxy (and thus IP address). If the cookies were not sent on every request, the server would have no way to know which user is requesting a given resource.

Finally, the browser has no clue whether the server needs the cookies or not; it just knows the server instructed it to send the cookie for any request to foo.com, so it does. Sometimes images need them (e.g., dynamically generated per user), sometimes not, but the browser can't tell.
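This is easy to see on the wire: every subresource request carries the same Cookie header. A sketch of what the browser emits for two of the 50 requests (host and cookie values are made up for illustration):

```python
cookies = "sessionid=abc123; theme=dark"

def build_request(path: str, host: str = "foo.com") -> str:
    # Raw HTTP/1.1 request text; the Cookie header is repeated verbatim
    # on every request to the matching host, even for static images.
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Cookie: {cookies}\r\n"
            "\r\n")

print(build_request("/index.html"))
print(build_request("/images/logo.png"))  # same Cookie header again
```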

derobert
Is this true with HTTP 1.1, which is a multiplexing scheme? I.e., the requests are bundled into a single TCP connection. Of course every request is received with a copy of the cookie attached. But if the concern is lots of transmission duplication, HTTP 1.1 is in a position to optimize. Though I don't know if it actually does...
Chris Noe
Then the issue becomes "which requests did the browser intend to attach the cookies to?" The server sets the policy with the cookie, to decide which domains, and which URL paths, the cookie should be sent back to, but then it forgets it. You'd need a way to specify that certain requests in the connection had the cookie, and others didn't. That definitely doesn't exist in HTTP/1.1, except by explicitly including them in every request. Honestly, a better (standards-compatible) solution for reducing bandwidth would be client-side gzip content-encoding, but nobody supports that yet.
Ian Clelland
@Ian Clelland: The client has to send the first message, so it doesn't know what the server would send for Accept-Encoding (were servers to send that field; HTTP/1.1 §14.3 says it's a request header). And the problem is that it could vary by URL even on the same server, and can change over time, so making it work would be non-trivial.
derobert
@Chris: No, keepalive just saves TCP connection setup/teardown overhead, that's all. Full headers are still sent for every request. However, pipelining (sending multiple requests w/o waiting for the response) can help greatly. HTTP/1.1 §8.1 gives details.
derobert