views:

127

answers:

7

I have intermediate knowledge of Python. If I have to write a web crawler in Python, what should I follow and where should I begin? Is there a specific tutorial? Any advice would be of much help. Thanks.

+2  A: 

Why not look for existing code that already does what you need? If you need to build one yourself, it's still worth looking at existing code and deconstructing it to figure out how it works.

gotgenes
It's just that I'm still a learner, and I'll understand certain things only if I get the basics the right way. Thanks for your help though; I'm looking at the code now :)
The Learner
+3  A: 

You will surely need an HTML parsing library; for this you can use BeautifulSoup. You can find lots of samples and tutorials for fetching URLs and processing the returned HTML on the official page: http://www.crummy.com/software/BeautifulSoup/
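
To sketch the idea (an untested sketch assuming Python 3 and the newer bs4 package; the import used in the comments below, from BeautifulSoup import BeautifulSoup, is the older BeautifulSoup 3 style, and example.com is a placeholder):

    from urllib.request import urlopen
    from bs4 import BeautifulSoup

    # fetch a page and list every link on it
    html = urlopen("http://example.com/").read()
    soup = BeautifulSoup(html, "html.parser")
    for a in soup.find_all("a", href=True):
        print(a["href"])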

Giljed Jowes
Thanks :) To start with, I'm interested to know which libraries/modules I should import along with this one. My objective is to write a simple crawler (without multithreading, if that counts).
The Learner
BeautifulSoup is pretty easy to work with: from BeautifulSoup import BeautifulSoup; soup = BeautifulSoup("<html>...</html>")
Tim McNamara
+1  A: 

Another good library you might need is one for parsing feeds. Now that you have BeautifulSoup for HTML, you can use Feedparser for the feeds: http://www.feedparser.org/
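
For instance, a minimal sketch (the feed URL is a placeholder):

    import feedparser

    # parse() fetches and parses the feed in one step
    feed = feedparser.parse("http://example.com/feed.xml")
    for entry in feed.entries:
        print(entry.title, entry.link)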

Giljed Jowes
Welcome to Stackoverflow (SO). Next time, just edit your existing answer with the new information :)
Kyle Rozendo
+3  A: 

I strongly recommend taking a look at Scrapy. The library can work with BeautifulSoup or any HTML parser you prefer; I personally use it with lxml.html.

Out of the box, you get several things for free (a minimal spider sketch follows the list):

  • Concurrent requests, thanks to Twisted
  • CrawlSpider objects that recursively follow links across a whole site
  • Clean separation of data extraction and processing, which makes the most of the parallel processing capabilities
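
A minimal CrawlSpider sketch (untested; the names and URL are placeholders, and exact import paths vary between Scrapy versions):

    from scrapy.spiders import CrawlSpider, Rule
    from scrapy.linkextractors import LinkExtractor

    class SiteSpider(CrawlSpider):
        name = "site"
        start_urls = ["http://example.com/"]
        # follow every link and hand each fetched page to parse_item
        rules = (Rule(LinkExtractor(), callback="parse_item", follow=True),)

        def parse_item(self, response):
            yield {"url": response.url, "title": response.css("title::text").get()}

You can run a standalone spider like this with scrapy runspider.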
Tim McNamara
+1 for recommending scrapy
Uku Loskit
+2  A: 

If you still want to write one from scratch, you'll want to use the mechanize module. It includes everything you need to simulate a browser and automate the fetching of URLs. I'll be redundant and also say BeautifulSoup for parsing any HTML you fetch. Otherwise, I'd go with Scrapy...
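
Roughly like this (an untested sketch of the mechanize + BeautifulSoup combination; the URL is a placeholder):

    import mechanize
    from bs4 import BeautifulSoup

    br = mechanize.Browser()
    br.set_handle_robots(False)  # a real crawler should respect robots.txt instead

    response = br.open("http://example.com/")
    soup = BeautifulSoup(response.read(), "html.parser")

    # mechanize also extracts links itself, without separate parsing
    for link in br.links():
        print(link.absolute_url)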

razzmataz
+1  A: 

It depends on your needs. If you need basic web scraping, then mechanize + BeautifulSoup will do the job.

If you need JavaScript to be rendered, then I would go for Selenium, or spynner. Both are great.
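
A minimal Selenium sketch (assumes a browser driver is installed; setup details vary by version and browser):

    from selenium import webdriver

    driver = webdriver.Firefox()
    try:
        driver.get("http://example.com/")
        # page_source includes JavaScript-rendered content
        html = driver.page_source
    finally:
        driver.quit()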

monstru0
+2  A: 

IBM developerWorks has an article on this: https://www.ibm.com/developerworks/linux/library/l-spider/#N101C6. You'll likely want to use the libraries that others have suggested, but this will give you an overall idea of the flow.
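
The core loop such a crawler runs looks roughly like this (my own sketch, not the article's code; it uses BeautifulSoup as suggested above):

    from collections import deque
    from urllib.parse import urljoin
    from urllib.request import urlopen
    from bs4 import BeautifulSoup

    def crawl(start_url, max_pages=10):
        # frontier holds URLs to visit; seen prevents refetching
        seen, frontier = set(), deque([start_url])
        while frontier and len(seen) < max_pages:
            url = frontier.popleft()
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urlopen(url).read()
            except OSError:
                continue
            # queue every absolute http(s) link found on the page
            for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
                link = urljoin(url, a["href"])
                if link.startswith("http"):
                    frontier.append(link)
        return seen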

Ryan Ische