Since nothing so far is working, I started a new project with

python scrapy-ctl.py startproject Nu

I followed the tutorial exactly, created the folders, and wrote a new spider:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from Nu.items import NuItem
from urls import u

class NuSpider(CrawlSpider):
    domain_name = "wcase"
    start_urls = ['http://www.whitecase.com/aabbas/']

    names = hxs.select('//td[@class="altRow"][1]/a/@href').re('/.a\w+')

    u = names.pop()

    rules = (Rule(SgmlLinkExtractor(allow=(u, )), callback='parse_item'),)

    def parse(self, response):
        self.log('Hi, this is an item page! %s' % response.url)

        hxs = HtmlXPathSelector(response)
        item = Item()
        item['school'] = hxs.select('//td[@class="mainColumnTDa"]').re('(?<=(JD,\s))(.*?)(\d+)')
        return item

SPIDER = NuSpider()

and when I run

C:\Python26\Scripts\Nu>python scrapy-ctl.py crawl wcase

I get

[Nu] ERROR: Could not find spider for domain: wcase

The other spiders are at least recognized by Scrapy; this one is not. What am I doing wrong?

Thanks for your help!

A: 

Have you included the spider in the SPIDER_MODULES list in your scrapy_settings.py?

The tutorial doesn't say anywhere that you should do this, but you do have to.
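
For reference, a minimal sketch of the relevant settings. The values are assumptions based on the Nu project from the question:

# scrapy_settings.py (sketch; 'Nu.spiders' matches the project in the question)
BOT_NAME = 'Nu'
SPIDER_MODULES = ['Nu.spiders']  # Scrapy only discovers spiders in the modules listed here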

This is included when the project is created: SPIDER_MODULES = ['Nu.spiders']. But I don't know if I need to add domain_name = 'wcase' somewhere as well. The spider is now running, but it only scans the initial URL and doesn't follow the allowed links. See my other question: http://stackoverflow.com/questions/1809817/scrapy-sgmllinkextractor-question
Zeynel
A: 

I believe you have errors there. The names = hxs... line will not work, because hxs is not defined at the point where you use it.

Try running python yourproject/spiders/domain.py to see the errors reported.
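
For illustration, a class body is executed as soon as the module is imported or run, so a name that doesn't exist at that point fails immediately. A minimal, hypothetical example (not the code from the question):

# The class body runs at import time, so this fails straight away
# with: NameError: name 'hxs' is not defined
class Broken(object):
    names = hxs.select('//a/@href')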

Rho
A: 

These two lines look like they're causing trouble:

u = names.pop()

rules = (Rule(SgmlLinkExtractor(allow=(u, )), callback='parse_item'),)
  • Only one rule will be followed each time the script is run. Consider creating a rule for each URL.
  • You haven't created a parse_item callback, which means the rule does nothing. The only callback you've defined is parse, which changes the default behaviour of the spider (a sketch of a fix is at the end of this answer).

Also, here are some things worth looking into:

  • CrawlSpider doesn't like having its default parse method overridden. Search for parse_start_url in the documentation or the docstrings; you'll see that it is the preferred way to handle responses from your starting URLs (see the sketch below).
  • NuSpider.hxs is referenced before it's defined.
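
Putting those points together, here's a rough sketch of how the spider could be restructured. It is only a sketch: the allow pattern is a guess at what you want matched, and it assumes NuItem declares a 'school' field; the rest mirrors the code and old Scrapy APIs from the question.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from Nu.items import NuItem

class NuSpider(CrawlSpider):
    domain_name = "wcase"
    start_urls = ['http://www.whitecase.com/aabbas/']

    # One rule per pattern you want followed; the pattern here is a guess
    rules = (Rule(SgmlLinkExtractor(allow=(r'/aabbas/', )), callback='parse_item'),)

    def parse_start_url(self, response):
        # Handle the start_urls responses here instead of overriding parse(),
        # which CrawlSpider needs for itself
        return self.parse_item(response)

    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)
        hxs = HtmlXPathSelector(response)
        item = NuItem()  # the question instantiated Item(); the project's item class is presumably intended
        item['school'] = hxs.select('//td[@class="mainColumnTDa"]').re(r'(?<=(JD,\s))(.*?)(\d+)')
        return item

SPIDER = NuSpider()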
Tim McNamara