One piece of SEO advice we got was to move all JavaScript into external files, so that the code could be removed from the page text. For fixed scripts this is not a problem, but some scripts need to be generated at runtime because they depend on a ClientId that is generated by ASP.NET. Can I use the ScriptManager (from ASP.NET AJAX or from Telerik) to send such a script to the browser, or do I need to write my own component for that?

I have only found ways to combine fixed files and/or embedded resources (which are also fixed).
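
To make this concrete, here is a rough sketch of the kind of script I mean (the txtFilter control and the focus call are just placeholder examples):

// C# code-behind, inside the Page class of an ASP.NET WebForms page.
// using System; using System.Web.UI;  (ScriptManager lives in System.Web.UI)
protected void Page_Load(object sender, EventArgs e)
{
    // The script text depends on a ClientID that is only known at
    // runtime, so it cannot live in a fixed external .js file.
    string script = string.Format(
        "document.getElementById('{0}').focus();",
        txtFilter.ClientID);

    // RegisterStartupScript emits the script inline in the page,
    // which is exactly what the SEO advice says to avoid.
    ScriptManager.RegisterStartupScript(
        this, GetType(), "focusScript", script, true);
}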

+1  A: 

Spiderbots do not read JavaScript blocks. This advice is plain wrong.

Diodeus
The reason given was to improve the content-to-code ratio: moving the script text out of the HTML file saves the spider from having to read and ignore it.
Hans Kesting
JavaScript is not counted as code, just HTML.
Diodeus
+1  A: 

Some JavaScript can break W3C validators (and possibly cause issues with some spiderbots). You can reduce this by placing the following around your JavaScript:

<!--

... your javascript code and functions ...

// -->

Mark Redman
I disagree, and so do many others: http://stackoverflow.com/questions/204813/does-it-still-make-sense-to-use-html-comments-on-blocks-of-javascript
Diodeus
Which part do you disagree with? I have used this to "fix" W3C validation, so on that basis it is correct. On that same basis I am saying it may also fix parsing by other spiderbots, in this case SEO-related bots. If you don't agree with that, then I guess it confirms my doubt (i.e. why I said "possibly").
Mark Redman
Note: this code helps with W3C validation when you have some HTML written out by JavaScript (e.g. showing fallback content when Flash is not available). In that case the HTML is sometimes escaped to construct a string in JavaScript, which breaks validation because the validation "bot" thinks it is seeing bad HTML. This is what I am referring to.
Mark Redman
SEO has nothing to do with validation. If you want to see what the bots see, use Lynx (http://en.wikipedia.org/wiki/Lynx_%28web_browser%29). Bots do not see code written by JavaScript because they don't execute JavaScript; they scrape the page.
Diodeus
If a bot scrapes a page and sees HTML strings within JavaScript, it may treat them as HTML. If the strings are escaped, the bot may read them as invalid HTML. This is a problem when validating pages that build HTML in JavaScript strings, and it has to do with validation, not SEO. Agreed? On the basis that SEO uses a bot to parse HTML, it "may"/"possibly" have the same issue. I don't know whether having a validated page is better for SEO or not; I am just commenting on how the bot may fail to parse the HTML.
Mark Redman
The bot finds the <script> tag and the </script> tag and ignores everything in between; whether the stuff in between is HTML or not is irrelevant.
Diodeus
+2  A: 

How about registering the ClientIDs in an inline JavaScript array/hash, and having your external JS file iterate through that?
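
Something like this, perhaps (a rough sketch; txtName, btnSubmit and the clientIds name are made-up examples):

// C# code-behind: emit only a tiny inline map of ClientIDs;
// all the real logic stays in a fixed, cacheable external .js file.
string idMap = string.Format(
    "var clientIds = {{ nameBox: '{0}', submitButton: '{1}' }};",
    txtName.ClientID, btnSubmit.ClientID);

ScriptManager.RegisterStartupScript(
    this, GetType(), "clientIdMap", idMap, true);

// The fixed external file can then look the controls up, e.g.:
//   var box = document.getElementById(clientIds.nameBox);

That keeps the inline footprint down to one short assignment, so the content-to-code ratio barely changes.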

James McCormack
A: 

Apparently there is no way to register generated script in the ScriptManager. But there is no need (from an SEO perspective), as the spiders should ignore the JavaScript code. (With the possible exception that script a validator breaks on might also break a spider; that script could be "HTML-commented" to help both the validator and the spider.)

Thanks, all, for the answers.

Hans Kesting