views: 255
answers: 3
I want to be able to download a page and all of its associated resources (images, style sheets, script files, etc.) using Python. I am (somewhat) familiar with urllib2 and know how to download individual URLs, but before I go and start hacking at BeautifulSoup + urllib2 I wanted to be sure there isn't already a Python equivalent to "wget --page-requisites http://www.google.com".

Specifically I am interested in gathering statistical information about how long it takes to download an entire web page, including all resources.
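In case it helps frame the question, this is the rough shape of what I had in mind with BeautifulSoup + urllib2; the function names are just illustrative and there's no error handling:

    import time
    import urllib2
    import urlparse
    from BeautifulSoup import BeautifulSoup   # with bs4: from bs4 import BeautifulSoup

    def page_requisite_urls(base_url, html):
        """Collect absolute URLs of the page's images, scripts and stylesheets."""
        soup = BeautifulSoup(html)
        urls = []
        for node in soup.findAll('img', src=True):
            urls.append(urlparse.urljoin(base_url, node['src']))
        for node in soup.findAll('script', src=True):
            urls.append(urlparse.urljoin(base_url, node['src']))
        for node in soup.findAll('link', rel='stylesheet', href=True):
            urls.append(urlparse.urljoin(base_url, node['href']))
        return urls

    def timed_page_download(url):
        """Download the page plus its requisites and return the elapsed seconds."""
        start = time.time()
        html = urllib2.urlopen(url).read()
        for resource in page_requisite_urls(url, html):
            urllib2.urlopen(resource).read()
        return time.time() - start

    print timed_page_download('http://www.google.com')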

Thanks, Mark

A: 

Websucker? See http://effbot.org/zone/websucker.htm

RichieHindle
+1  A: 

Please see this Stack Overflow question on the same topic.

Reef
A: 

websucker.py doesn't follow CSS @import links. HTTrack (HTTrack.com) is not Python, it's C/C++, but it's a good, maintained utility for downloading a website for offline browsing.

http://www.mail-archive.com/[email protected]/msg13523.html [issue1124] Webchecker not parsing css "@import url"

Guido> This is essentially unsupported and unmaintained example code. Feel free to submit a patch though!
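If you do end up rolling your own downloader, @import rules can be picked up with a small regex over each fetched stylesheet. A rough sketch (the helper name is mine, and it won't cover every CSS corner case):

    import re
    import urlparse

    # Matches @import url("x.css"), @import url(x.css) and @import "x.css".
    IMPORT_RE = re.compile(r'@import\s+(?:url\()?["\']?([^"\'\)\s;]+)', re.IGNORECASE)

    def css_import_urls(base_url, css_text):
        """Return absolute URLs referenced by @import rules in a stylesheet."""
        return [urlparse.urljoin(base_url, u) for u in IMPORT_RE.findall(css_text)]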

jamshid