views: 2712
answers: 8
What is a good crawler (spider) to use against HTML and XML documents (local or web-based) that works well in the Lucene / Solr solution space? It could be Java-based but does not have to be.

+2  A: 

I suggest you check out Nutch to get some inspiration:

Nutch is open source web-search software. It builds on Lucene Java, adding web-specifics, such as a crawler, a link-graph database, parsers for HTML and other document formats, etc.

Luca
+6  A: 

In my opinion, this is a pretty significant hole that is holding back the widespread adoption of Solr. The new DataImportHandler is a good first step for importing structured data, but there is not a good document ingestion pipeline for Solr. Nutch does work, but the integration between the Nutch crawler and Solr is somewhat clumsy.
I've tried every open-source crawler that I could find, and none of them integrates out-of-the-box with Solr.
Keep an eye on OpenPipeline and Apache Tika.
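
To make that concrete, pushing crawler output into Solr yourself is only a few lines with SolrJ. This is just a rough sketch; the core URL, field names, and values below are placeholders, not anything from an existing pipeline:

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class SolrIngestExample {
        public static void main(String[] args) throws Exception {
            // Placeholder core URL; adjust to your Solr installation.
            SolrClient solr =
                    new HttpSolrClient.Builder("http://localhost:8983/solr/mycore").build();

            // Build a document from whatever the crawler fetched (field names are illustrative).
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "http://example.com/page.html");
            doc.addField("title", "Example page");
            doc.addField("content", "Text extracted from the crawled page");

            solr.add(doc);
            solr.commit();
            solr.close();
        }
    }

The hard part is everything before that call: fetching, parsing, and scheduling, which is exactly what the tools above try to cover.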

Geordie
+4  A: 

Also check Apache Droids [http://incubator.apache.org/droids/] -- this hopes to be a simple spider/crawler/worker framework.

It is new and is not yet easy to use off the shelf (it will take some tweaking to get running), but it is a good thing to keep your eye on.

+2  A: 

Nutch might be your closest match, but it's not too flexible.

If you need something more you will pretty much have to hack your own crawler. It's not as bad as it sounds: every language has web libraries, so you just need to connect a task queue manager with an HTTP downloader and an HTML parser. It's not really that much work. You can most likely get away with a single box, as crawling is mostly bandwidth-intensive, not CPU-intensive.
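
As a rough illustration of how small such a hand-rolled crawler can be, here is a single-box sketch using a plain queue plus jsoup for the HTTP download and HTML parsing; the seed URL and page limit are placeholders:

    import java.util.ArrayDeque;
    import java.util.HashSet;
    import java.util.Queue;
    import java.util.Set;
    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;

    public class TinyCrawler {
        public static void main(String[] args) throws Exception {
            Queue<String> frontier = new ArrayDeque<>();   // task queue
            Set<String> seen = new HashSet<>();            // avoid re-fetching
            frontier.add("http://example.com/");           // placeholder seed URL

            while (!frontier.isEmpty() && seen.size() < 100) {
                String url = frontier.poll();
                if (!seen.add(url)) continue;

                Document page = Jsoup.connect(url).get();  // HTTP download + HTML parse
                System.out.println(url + " -> " + page.title());

                // Push newly discovered links back onto the queue.
                for (Element link : page.select("a[href]")) {
                    String next = link.attr("abs:href");
                    if (next.startsWith("http")) {
                        frontier.add(next);
                    }
                }
            }
        }
    }

A real crawler would add politeness delays, robots.txt handling, and persistence, but the core loop really is this small.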

taw
+5  A: 

I've tried Nutch, but it was very difficult to integrate with Solr. I would take a look at Heritrix. It has an extensive plugin system that makes it easy to integrate with Solr, and it is much, much faster at crawling. It makes extensive use of threads to speed up the process.
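
To illustrate the threaded-fetching idea in general (this is not Heritrix's own API), a fixed thread pool downloading pages concurrently looks roughly like this; the URL list and pool size are placeholders:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ThreadedFetcher {
        public static void main(String[] args) {
            // Placeholder URLs; a real crawler would pull these from its frontier.
            List<String> urls = List.of("http://example.com/a", "http://example.com/b");

            ExecutorService pool = Executors.newFixedThreadPool(8); // assumed pool size
            HttpClient client = HttpClient.newHttpClient();

            for (String url : urls) {
                pool.submit(() -> {
                    try {
                        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).build();
                        HttpResponse<String> response =
                                client.send(request, HttpResponse.BodyHandlers.ofString());
                        System.out.println(url + " -> " + response.statusCode());
                    } catch (Exception e) {
                        System.err.println("Failed to fetch " + url + ": " + e);
                    }
                });
            }
            pool.shutdown();
        }
    }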

John
A: 

Has anyone tried Xapian? It seems much quicker than Solr and is written in C++.

A: 

I'm looking for a solution like that. I need to index job pages, like a job spider or job crawler, and integrate it into my job portal site. If anyone already has this solution, please drop me a line. Thanks. ale at ivagas dot com

alcastrobr
A: 

I developed a crawler for Solr (but not only for Solr). The main goals of this crawler are:

  • be able to crawl any source type (web, databases, file systems, CMS, ...). Each source type has its own "source connector". Today, we have a web source connector, a file system connector, and any CMS supported by the EntropySoft connectors library (http://www.entropysoft.net/cms/lang/en/home/Product/connectors).

  • be able to do anything with a crawled item (web page, CMS document, database record, ...). Crawled items are handled by a "document handler". Today we have a Solr document handler (add, update, or remove documents in Solr indices).

  • be multi-threaded (crawl several sources, and several documents per source, at the same time)

  • be highly configurable:

    • number of sources crawled simultaneously
    • number of items crawled simultaneously per source
    • recrawl period rules based on item type
    • item type inclusion / exclusion rules
    • item path inclusion / exclusion rules
    • depth rules
    • ...
  • be able to extract text from items (at the document handler level) with the Tika library (http://lucene.apache.org/tika/) -- see the sketch after this list

  • be compatible with both Windows and Linux
  • provide an administration and monitoring web interface (I have attached several screenshots)

  • be easily extendable (with source connectors and document handlers)
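
To give an idea of the text-extraction point mentioned above, a document handler can hand the fetched content to Tika's facade; this is only a sketch with a placeholder file, not the crawler's actual code:

    import java.io.File;
    import org.apache.tika.Tika;

    public class TikaExtractExample {
        public static void main(String[] args) throws Exception {
            Tika tika = new Tika();

            // Placeholder file; in the crawler this would be the fetched item's content.
            File crawled = new File("page.html");

            // Tika detects the MIME type and extracts plain text in one call.
            String mimeType = tika.detect(crawled);
            String text = tika.parseToString(crawled);

            System.out.println(mimeType);
            System.out.println(text);
        }
    }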

The crawler is developed in Java. A MySQL database is used to store a reference to each crawled item (crawl status, last crawl time, next crawl time, MIME type, ...).

Hurisearch (www.hurisearch.org) crawls 5,400 web sites; nearly 10,000,000 pages have been crawled and indexed. All sources can be crawled in 3 days.

Hurisearch uses only one dedicated Debian Linux server (http://www.ovh.com/fr/produits/superplan_best_of.xml) for both indexing and search.

A dedicated web site describing this crawler will be available soon. The link will be provided in this French article: http://www.zoonix.fr/2010/03/07/un-crawler-web-pour-solr/

Dominique