views: 40
answers: 3

I implemented a REST service and I'm using a web page as the client. My page has some JavaScript functions that perform the same HTTP GET request to the REST server several times and process the replies.

My problem is that the browser caches the first reply and doesn't actually send the subsequent requests.

Is there some way to force the browser to execute all the requests without caching? I'm using Internet Explorer 8.0.

Thanks

+6  A: 

Not really. This is a known issue with IE; the classic solution is to append a random parameter to the end of the query string for every request. Most JS libraries do this natively if you ask them to (jQuery's cache: false AJAX option, for instance).
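
For instance, a minimal sketch with jQuery (the endpoint is hypothetical); with cache: false, jQuery appends a _=&lt;timestamp&gt; parameter to the query string of each GET so the browser never sees the same URL twice:

    $.ajax({
        url: '/myApp/foo',  // hypothetical endpoint
        type: 'GET',
        cache: false,       // jQuery adds a _=<timestamp> cache-busting parameter
        success: function (data) {
            // process the reply
        }
    });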

Victor Nicollet
+6  A: 

Not sure if it can help you, but sometimes I add a random parameter to the URL of my request in order to prevent the response from being cached.

So instead of having:

http://my-server:8080/myApp/foo?bar=baz

I will use:

http://my-server:8080/myApp/foo?bar=baz&random=123456789

Of course, the value of the random parameter must be different for every request. You can use the current time in milliseconds for that.
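
A minimal sketch of that idea in plain JavaScript (XMLHttpRequest is available natively in IE8; the URL is the example above):

    // Append the current time in milliseconds as the cache-busting value
    var url = 'http://my-server:8080/myApp/foo?bar=baz&random=' + new Date().getTime();
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            // process the reply
        }
    };
    xhr.send();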

romaintaz
+2  A: 

Well, of course you don't actually want to disable the browser cache entirely; correct caching is a key part of REST. The fact that HTTP (if properly followed by both client and server) allows for a high degree of caching, while also giving fine control over cache expiry and revalidation, is one of its key advantages.

There is, though, an issue, as you have spotted, with subsequent GETs to the same URI from the same document (the same DOM document lifetime; reload the page and you'll get another go at that XMLHttpRequest). IE seems to treat it as it would a request for more than one copy of the same image or other related resource in a web page: it uses the cached version even if the entity isn't cacheable.

Firefox has the opposite problem, and will send a subsequent request even when caching information says that it shouldn't!

We could add a random or time-stamped bogus parameter to the end of the query string for each request. However, this is a bit like screaming "THIS IS SPARTA!" and kicking our hard-won download into a deep pit that no Health & Safety inspector considered putting a safety rail around: we obviously don't want to repeat a full unconditional request when we don't need to.

However, this behaviour has a time component. If we delay the subsequent request by a second, then IE will re-request when appropriate, while Firefox will honour the max-age and expires headers and not re-request needlessly.

Hence, if two requests could come within a second of each other (either because we know they are called from the same function, or because two events could trigger them in close succession), using setTimeout to delay the second request until a second after the first has completed will make both browsers use the cache correctly, rather than exhibit their two different sorts of incorrect behaviour.
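
A minimal sketch of that approach, assuming a hypothetical fetchStatus helper that wraps the GET and invokes a callback when it completes:

    function fetchStatus(callback) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/myApp/foo?bar=baz', true); // hypothetical endpoint
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4) {
                callback(xhr);
            }
        };
        xhr.send();
    }

    fetchStatus(function (first) {
        // Delay the second request until a second after the first completes,
        // so IE re-requests when appropriate and Firefox honours max-age.
        setTimeout(function () {
            fetchStatus(function (second) {
                // process the second reply
            });
        }, 1000);
    });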

Of course, a second's delay is a second's delay. This could be a big deal or not, depending primarily on the size of the downloaded entity.

Another possibility is that something that changes so rapidly shouldn't be modelled as GETting the state of a resource at all, but as POSTing a request for the current status to a resource. This does smell heavily of abusing REST by POSTing what should really be a GET, though.
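
A minimal sketch of that alternative (the endpoint and request body are hypothetical); browsers do not cache POST responses by default, which sidesteps the problem at the cost of bending REST semantics:

    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/myApp/statusQueries', true); // hypothetical endpoint
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            // process the current status
        }
    };
    xhr.send('bar=baz');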

All of which can mean that, on balance, the THIS IS SPARTA approach of appending random stuff to query strings is the way to go. It depends, really.

Jon Hanna