views: 116
answers: 4

I'm trying to scrape and submit information to websites that rely heavily on JavaScript for most of their actions. These sites won't even work when I disable JavaScript in my browser.

I've searched for solutions on Google and SO, and someone suggested I should reverse-engineer the JavaScript, but I have no idea how to do that.

So far I've been using Mechanize, and it works on websites that don't require JavaScript.

Is there any way to access websites that use JavaScript with urllib2 or something similar? I'm also willing to learn JavaScript, if that's what it takes.

+3  A: 

Check out crowbar. I haven't had any experience with it, but I was curious about the answer to your question so I started googling around. I'd like to know if this works out for you.

http://grep.codeconsult.ch/2007/02/24/crowbar-scrape-javascript-generated-pages-via-gecko-and-rest/
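
Since Crowbar exposes the rendered page over a local REST interface, you should be able to keep using urllib2 and just route requests through it. This is a rough, untested sketch; the port (10000) and the url/delay query parameters are my assumptions based on Crowbar's documentation, and the target address is only a placeholder:

    import urllib
    import urllib2

    # Crowbar runs as a separate process and renders the page with Gecko,
    # returning the HTML after the JavaScript has executed.
    params = urllib.urlencode({
        'url': 'http://example.com/js-heavy-page',  # placeholder target
        'delay': 3000,                              # assumed option: ms to wait for scripts
    })
    rendered = urllib2.urlopen('http://127.0.0.1:10000/?' + params).read()
    print rendered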

orangeoctopus
+1  A: 

Maybe you could use Selenium WebDriver, which has Python bindings, I believe. I think it's mainly used as a tool for testing websites, but I guess it should be usable for scraping too.
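
For what it's worth, a minimal sketch of that idea with the Python bindings might look like the following (the URL is just a placeholder, and it assumes a local Firefox install):

    from selenium import webdriver

    driver = webdriver.Firefox()                    # drives a real browser, so JavaScript runs
    driver.get('http://example.com/js-heavy-page')  # placeholder URL
    html = driver.page_source                       # HTML after the scripts have executed
    driver.quit()
    print html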

Steven
+2  A: 

Hi there,

I've had exactly the same problem. It is not at all simple, but I finally found a great solution using PyQt4.QtWebKit.

You will find the explanations on this webpage: http://blog.motane.lu/2009/07/07/downloading-a-pages-content-with-python-and-webkit/

I've tested it, I currently use it, and it's great!

Its great advantage is that it can run on a server using only X, without a full graphical environment.
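
For reference, the approach described there boils down to something like the sketch below (reconstructed from memory of the article rather than copied, so treat it as a starting point; the class name and URL are arbitrary):

    import sys
    from PyQt4.QtCore import QUrl
    from PyQt4.QtGui import QApplication
    from PyQt4.QtWebKit import QWebPage

    class Render(QWebPage):
        """Load a URL in WebKit and keep the HTML once JavaScript has run."""
        def __init__(self, url):
            self.app = QApplication(sys.argv)
            QWebPage.__init__(self)
            self.loadFinished.connect(self._load_finished)
            self.mainFrame().load(QUrl(url))
            self.app.exec_()                  # block until loadFinished fires

        def _load_finished(self, ok):
            self.html = self.mainFrame().toHtml()
            self.app.quit()

    page = Render('http://example.com/js-heavy-page')  # placeholder URL
    print unicode(page.html)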

Guillaume Lebourgeois
+1  A: 

I would actually suggest using Selenium. It's mainly designed for testing web applications from a "user perspective"; however, it is basically a Firefox driver. I've actually used it for this purpose, although I was scraping a dynamic AJAX webpage. As long as the JavaScript form has recognizable anchor text that Selenium can "click", everything should sort itself out.
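
As a rough illustration of that idea (the URL and link text are made up for the example), clicking by link text with the Python bindings looks something like:

    from selenium import webdriver

    driver = webdriver.Firefox()
    driver.get('http://example.com/form-page')          # placeholder URL
    driver.find_element_by_link_text('Submit').click()  # hypothetical anchor text
    html = driver.page_source                           # page state after the click
    driver.quit()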

Hope that helps

JudoWill