views: 1007
answers: 5

+1  Q: 

jQuery DIV refresh

I'm using the simple jQuery DIV refresh code.

var refreshId = setInterval(function() { $('#refreshdash').load('dashboard.php?cache='); }, 4000);

Right? Some guy informed me that adding "?cache=" to the end of the file you're going to have refreshed will help lower bandwidth, etc., as he told me that it caches the file or something of the sort.

I didn't believe him whatsoever. Is this true? And if not, what does it actually do, nothing?

+1  A: 

If you put a query string with a random value on the end of the URL (like ?cache=), it will prevent caching. It will force a new round trip to the server for every request made to that URL.

More information is available here
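
For example, a minimal sketch of that idea applied to the code from the question (the ts parameter name is just an illustration, any name the server ignores works):

// Append the current time in milliseconds so each request uses a fresh URL
// and the browser cannot serve a cached copy.
var refreshId = setInterval(function() {
    $('#refreshdash').load('dashboard.php?ts=' + new Date().getTime());
}, 4000);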

jao
('GET', 'yourscript.php?ms=' + new Date().getTime(), true); Makes so much sense now. Thanks.
Homework
+1  A: 

Hi,

Yes, passing a random variable (like the current timestamp plus some hash) is helpful when you want to prevent the browser from caching results. But you must use it like "?cache=your_random_variable" (example: ?cache=abc9623498385023).
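
A quick sketch of one way to build such a value for the code in the question (the cacheValue helper is purely illustrative, not from this answer):

// Combine the current timestamp with a short random suffix so the value
// is different on every call.
function cacheValue() {
    return new Date().getTime().toString(16) + Math.random().toString(16).slice(2, 10);
}

$('#refreshdash').load('dashboard.php?cache=' + cacheValue());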

yoda
+1  A: 

Dynamic pages will by default never be cached, as PHP sends headers telling the browser not to cache the page. You can send the appropriate headers to make the page cacheable, but it won't be by default.

You can test this using Firebug's Net Panel. It will tell you if something was loaded from cache or not.

smack0007
+4  A: 

No, that is actually the exact opposite of what is happening.

Browsers cache content based on its URL. By adding extra query parameters to the end of a URL, you are effectively changing the location it is fetched from, so the browser is forced to re-request the content in case it has changed. Adding a cache=x parameter to the end is, for this reason, a technique called cache-breaking.

For example:

http://example.com/index?timestamp=100
http://example.com/index?timestamp=567

Both those URLs might return the same content, but they are different URLs, and thus will be cached separately.

The common cache-breaking technique is to add the current timestamp to the URL, as this will always be changing, ensuring a new URL is generated each time.

However, this will increase bandwidth, not decrease it, as the browsers will need to re-fetch your content each time.

The best use of this method is for static files that rarely change, but might be cached for a long time by proxy servers or other HTTP caches. I use this for .js and .css files. I will append the last modified time of the file onto the URL... whenever the files are updated, the URL changes and browsers know to re-fetch them.
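
A minimal server-side sketch of that last-modified approach, assuming a Node.js helper rather than the PHP the question uses (the function and file names are illustrative):

// Build an asset URL whose version parameter is the file's last-modified
// time, so the URL only changes when the file itself changes.
var fs = require('fs');

function versionedUrl(urlPath, filePath) {
    var mtime = fs.statSync(filePath).mtime.getTime();
    return urlPath + '?v=' + mtime;
}

// e.g. versionedUrl('/css/site.css', 'public/css/site.css')
//      -> '/css/site.css?v=1263456789000'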

zombat
This is very helpful. Thanks.
Homework
+1  A: 

Generally this technique is used with mostly static content.

You have your script return headers that tell the browser to cache for a long time - this lowers bandwidth because the browser will use the cached copy, instead of requesting a new one. Great for things like javascript libraries, logos, CSS files, etc.

The downside is that when you do change things, people won't see them because they've been cached. This can be even worse when you have interdependencies - such as a new javascript widget library that depends on a new version of your CSS file or another javascript file. If only one of them gets re-fetched, the page may not look/work properly.

One semi-solution to this is to set the expiry to a balanced time, e.g. a day, so that everyone will eventually request new content (at the expense of slightly increased bandwidth, since the content is re-fetched at least once a day). However, this doesn't solve dependency issues.

Using a random parameter (?cache=) is a great solution to this problem. Basically, the server ignores the parameter, but to the browser, different parameters mean different URLs. Your main site can know when the content changes, and thus change the parameter value at the instant it changes, forcing the browser to re-fetch (there is no possibility of stale caches or dependency problems, assuming your code is okay).

The parameter name doesn't matter, nor does its value -- obviously you want to avoid something that will be interpreted by the server, though. Popular choices for this mechanism:

* md5 of the file (cache this server-side as well, as it can be expensive to calculate; sketched below)
* date/time of the file, or a hash of the date/time
* version number of the file or of the overall site (if you increment a version every time you deploy any new content)
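
As a sketch of the md5 option from the list above (hypothetical Node.js code, assuming server-side JavaScript; the names are illustrative):

// Derive the cache parameter from an MD5 hash of the file contents,
// memoised per path because hashing on every request would be wasteful.
var crypto = require('crypto');
var fs = require('fs');

var hashCache = {};

function cacheParam(filePath) {
    if (!hashCache[filePath]) {
        hashCache[filePath] = crypto.createHash('md5')
            .update(fs.readFileSync(filePath))
            .digest('hex');
    }
    return hashCache[filePath];
}

// e.g. '/js/widgets.js?cache=' + cacheParam('public/js/widgets.js')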

There's a blog post up about how they do caching on Stack Overflow.


One other scenario I've seen is deploying to a server that by default sends headers that cause the content to be cached; for example, some hosting providers used to do this (and probably still do, but I haven't seen the problem personally in a decade). By setting ?cache= you can get around this. The real solution here, though, is to make your server not cache by default if that doesn't make sense for your use.

gregmac