Hi,
I have this short spider code:
    from scrapy.http import Request
    from scrapy.spiders import CrawlSpider

    class TestSpider(CrawlSpider):
        name = "test"
        allowed_domains = ["google.com", "yahoo.com"]
        start_urls = [
            "http://google.com"
        ]

        def parse2(self, response, i):
            print "page2, i: ", i
            # traceback.print_stack()

        def parse(self, response):
            for i in range(5):
                print "page1 i : ", i
                link = "http://www.google.com/search?q=" + str(i)
                yield Request(link, callback=lambda r: self.parse2(r, i))
and I would expect the output to look like this:
page1 i : 0
page1 i : 1
page1 i : 2
page1 i : 3
page1 i : 4
page2 i : 0
page2 i : 1
page2 i : 2
page2 i : 3
page2 i : 4
However, the actual output is this:
page1 i : 0
page1 i : 1
page1 i : 2
page1 i : 3
page1 i : 4
page2 i : 4
page2 i : 4
page2 i : 4
page2 i : 4
page2 i : 4
So the argument I pass in callback=lambda r: self.parse2(r, i)
is somehow being bound incorrectly.
What's wrong with the code?