Hi all.

I've been working on a site that uses jQuery heavily and loads in content via AJAX like so:

$('#newPageWrapper').load(newPath + ' .pageWrapper', function() {
    //on load logic
});

It has now come to my attention that Google won't index content that is loaded dynamically via JavaScript, so I've been looking for a solution to the problem.

I've read through Google's Making AJAX Applications Crawlable document what seems like 100 times and I still don't understand how to implement it (due for the most part to my limited knowledge of servers).

So my first question would be:

  • Is there a decent step-by-step tutorial out there that documents this from start to finish? I've tried to Google it and I'm not finding anything useful.

And secondly, if there isn't anything out there yet, would anyone be able to explain:

  1. How to 'Set up my server to handle requests for URLs that contain _escaped_fragment_'

  2. How to implement HtmlUnit on my server to create an 'HTML snapshot' of the page to show to the crawler.

I would be incredibly grateful if someone could shed some light on this for me, thanks in advance!

-Ben

+2  A: 

The best solution is to make a site that works both with and without JavaScript. Read up on progressive enhancement.
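
A minimal sketch of what that might look like with the jQuery you're already using (the 'a.nav-link' selector and the URL layout are placeholders, not anything from your actual site): every nav link points at a real, server-rendered page, and JavaScript merely intercepts the click to pull in the '.pageWrapper' portion via AJAX, so crawlers and non-JS visitors still get complete pages.

$('a.nav-link').click(function(e) {
    e.preventDefault();                     // skip the full page load when JS is available
    var newPath = $(this).attr('href');     // the same real URL the crawler follows
    $('#newPageWrapper').load(newPath + ' .pageWrapper', function() {
        //on load logic
    });
});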

epascarello
A: 

I couldn't find an alternative, so I took epascarello's advice and now I'm generating the content with PHP when the URL includes '_escaped_fragment_' (under Google's scheme, the crawler rewrites the '#!' part of a URL into that query parameter when it visits).

For those searching:

<?php

    // Under the AJAX-crawling scheme, the crawler requests
    // '?_escaped_fragment_=...' in place of the '#!' part of the URL,
    // so serve a plain-HTML snapshot of that page in this branch.
    if (isset($_GET['_escaped_fragment_'])) {

        $newID = $_GET['_escaped_fragment_'];

        //Generate page here
    }

?>
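
The server-side check above only ever fires if the crawler is sent an '_escaped_fragment_' URL in the first place, which under Google's scheme means the site's own navigation needs to use '#!' fragments (or the page needs a <meta name="fragment" content="!"> tag). A rough sketch of the client side under that assumption; the mapping from fragment to path is hypothetical and would need to match however your pages are actually served:

$(window).bind('hashchange', function() {
    // '/#!about' is requested by Googlebot as '/?_escaped_fragment_=about',
    // which the PHP check above picks up
    var newID = window.location.hash.replace(/^#!/, '');
    if (newID) {
        var newPath = '/' + newID;          // hypothetical fragment-to-URL mapping
        $('#newPageWrapper').load(newPath + ' .pageWrapper', function() {
            //on load logic
        });
    }
});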
bbeckford