views: 45

answers: 3

There seems to be a general conclusion floating around the internet that external JS files are better.

The main reasons given are caching, maintenance, and debuggability.

However, there does not seem to be much discussion of the overhead of the 304 HTTP requests. I went to yahoo.com and noticed that the 304 for each JavaScript file carries an overhead of around 30 ms per file (mostly connection and response overhead).
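
For reference, a measurement like that can be reproduced with a quick script. Here is a minimal sketch (host, path, and the freshness date are placeholders) that times a conditional GET which should come back as a 304:

    // measure304.js - time a conditional GET that we expect to be answered with 304.
    // Host and path are placeholders; point them at a real static JS URL.
    var http = require('http');

    var start = Date.now();
    var req = http.request({
      host: 'example.com',
      path: '/static/app.js',
      // Pretend we hold a freshly cached copy, so the server can reply 304.
      headers: { 'If-Modified-Since': new Date().toUTCString() }
    }, function (res) {
      res.resume(); // drain the (empty) body so 'end' fires
      res.on('end', function () {
        console.log(res.statusCode + ' in ' + (Date.now() - start) + ' ms');
      });
    });
    req.end();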

I have separate JavaScript files (solving the maintenance problem). I don't have much need for debuggability (automated tests are very helpful).

I'm considering whether or not to package and inline them into a single script tag at the top of the HTML document. I'm aware there is a point where this no longer makes sense (when my JavaScript is very large), and I should benchmark this.
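
A build step along these lines could do the packaging; this is only a sketch, and the file names and the HTML placeholder comment are made up:

    // build.js - concatenate the separate JS files and inline them into one
    // <script> tag in the page. File names and the placeholder are illustrative.
    var fs = require('fs');

    var sources = ['lib.js', 'app.js', 'widgets.js']; // still maintained separately
    var bundle = sources.map(function (name) {
      return fs.readFileSync(name, 'utf8');
    }).join(';\n'); // the ';' guards against files missing a trailing semicolon

    var template = fs.readFileSync('index.template.html', 'utf8');
    fs.writeFileSync('index.html',
      template.replace('<!-- INLINE_JS -->',
                       '<script type="text/javascript">\n' + bundle + '\n</script>'));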

I'm just wondering: has anybody already run benchmarks on this, and what sort of results did they get?

+1  A: 

Sorry, I have no benchmarks. I doubt the 30 ms would be worse than having larger HTML documents and having to recompile the JS on every visit (assuming newer JavaScript engines can, or will in future, keep and reuse the compiled bytecode if the JS file is cached).

Also, doesn't this depend on caching policy? Most sites I develop won't even issue a request if the JS file is cached, certainly not during a browser session; it'll go straight to the cache (I know this because I occasionally get calls from clients who need to press F5 to see my changes).

Another benefit is validity: it's not trivial to have valid JavaScript embedded in an XHTML+XML document. It's much easier to use external JS files if this is a factor.

You can also distribute your JS files and serve them from a Content Distribution Network (CDN); you'll definitely forfeit this opportunity to reduce server load if you choose to inline your JavaScript into your HTML.

Lee Kowalkowski
It seems like there is always an HTTP request (at least with Firefox) when using the Expires header; the server responds with a 304 if the cache is still valid. Also, wouldn't the JS need to be reinterpreted every time anyway? I think the savings from caching would only be on the data transfer. Good points on the CDN and validity.
Brian Takita
For interpretation-based engines, yes, but V8 for example compiles the JavaScript into native machine code up front. If the JS file is cached, there's no reason for the browser to recompile it. JS inside your HTML is not as likely to be cached. I don't know if such browsers will recompile the JS regardless, but if they do today, they may not in future.
Lee Kowalkowski
+1  A: 

I also don't have benchmarks; it really depends on your connection latency too. But subjectively I've never felt this latency much.

Still, I would recommend splitting the dynamic content (HTML you render on the server) from the static content (CSS, JS). First of all, the payload of your HTML gets much smaller (you save server render time and transfer less data), and furthermore it is a clean separation and more maintainable from a code perspective.

If you want to avoid conditional GETs (e.g. via Last-Modified or ETag headers), you can also use the Expires header. Conformant browsers then don't make an HTTP call at all.
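
A minimal sketch of that approach with Node's built-in http module (port, directory layout, and the one-year lifetime are illustrative):

    // static.js - serve static files with far-future Expires/Cache-Control headers,
    // so conformant browsers reuse their cached copy without making any request.
    // No content-type handling or path hardening here; it's only a sketch.
    var http = require('http');
    var fs = require('fs');
    var path = require('path');

    http.createServer(function (req, res) {
      var file = path.join(__dirname, 'static', req.url);
      fs.readFile(file, function (err, data) {
        if (err) { res.writeHead(404); res.end(); return; }
        var oneYear = 365 * 24 * 60 * 60; // seconds
        res.writeHead(200, {
          'Cache-Control': 'public, max-age=' + oneYear,
          'Expires': new Date(Date.now() + oneYear * 1000).toUTCString()
        });
        res.end(data);
      });
    }).listen(8080);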

manuel aldana
Conditional GET avoidance seems good. Also, one could version the JavaScript files to expire the cache (see the sketch below): http://stackoverflow.com/questions/2320500/forcing-cache-expiration-from-a-javascript-file
Brian Takita
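
For instance, the version could be derived from the file's contents, so the URL (and therefore the cache entry) only changes when the file does; the helper and file names here are illustrative:

    // version.js - build a cache-busting URL from a hash of the file's contents.
    var fs = require('fs');
    var crypto = require('crypto');

    function versionedUrl(file) {
      var hash = crypto.createHash('md5')
                       .update(fs.readFileSync(file))
                       .digest('hex')
                       .slice(0, 8);
      return '/' + file + '?v=' + hash; // e.g. /app.js?v=3f2a9c1b
    }

    console.log('<script src="' + versionedUrl('app.js') + '"></script>');
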
A: 

I had a similar question a while ago, and decided to ask @getify and @zoompf, two front-end performance experts.

When is it acceptable to use inline <script> elements? When is it better to use separate .js files? The same question can be asked about inline vs. linked CSS — where do you draw the line?

See http://mathiasbynens.be/notes/inline-vs-separate-file for their responses.

Mathias Bynens