I want a list of URLs from which my crawler can start crawling efficiently, so that it can cover as much of the web as possible. Do you have any other ideas for creating an initial index for different hosts? Thank you.

+1  A: 

Maybe results from another search engine, queried for keywords from the problem domain you're trying to explore?
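
If you go this route, the bootstrap step could look roughly like the sketch below. The API endpoint and the response shape are hypothetical placeholders for whatever search API you actually have access to:

    # Hedged sketch: seed a crawler from search results for domain keywords.
    # SEARCH_API_URL and the {"results": [{"url": ...}]} response shape are
    # hypothetical - substitute a search API you can actually query.
    import json
    import urllib.parse
    import urllib.request

    SEARCH_API_URL = "https://api.example-search.com/search"  # placeholder

    def seeds_for_keywords(keywords, per_query=20):
        seeds = set()
        for kw in keywords:
            qs = urllib.parse.urlencode({"q": kw, "count": per_query})
            with urllib.request.urlopen(SEARCH_API_URL + "?" + qs) as resp:
                data = json.load(resp)
            seeds.update(item["url"] for item in data.get("results", []))
        return sorted(seeds)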

tdammers
A: 

IMO it doesn't really matter - as long as those URLs link to various parts of the web, you can be reasonably sure your crawler will crawl most non-dark (i.e. linked to) pages on the Web, sooner or later (probably later, given the size of the Web).

I'd suggest starting from some site's front page that has many links leading out to many different places on the web (hint hint), and going from there.

The problem you'll have won't be a lack of links, wherever you start - quite the contrary: you'll have far too many, and you'll need to implement an algorithm to keep track of where you've been, where to go next, and how to avoid semi-infinite and infinite loops.
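
Here's a minimal sketch of that bookkeeping, assuming Python and nothing beyond the standard library; a real crawler would add robots.txt handling, politeness delays, URL canonicalization, and persistent storage:

    # Frontier queue = "where to go next"; visited set = "where you've been".
    # Dropping #fragments when resolving links keeps the same page from
    # re-entering the queue under many names - one common source of loops.
    import re
    import urllib.request
    from collections import deque
    from urllib.parse import urldefrag, urljoin

    LINK_RE = re.compile(r'href="([^"]+)"')  # crude; use a real HTML parser in practice

    def crawl(seed_urls, max_pages=100):
        frontier = deque(seed_urls)  # FIFO = breadth-first over the link graph
        visited = set()
        while frontier and len(visited) < max_pages:
            url = frontier.popleft()
            if url in visited:
                continue
            visited.add(url)
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    html = resp.read().decode("utf-8", errors="replace")
            except Exception:
                continue  # dead or unreadable page: skip it
            for href in LINK_RE.findall(html):
                absolute, _ = urldefrag(urljoin(url, href))
                if absolute.startswith("http") and absolute not in visited:
                    frontier.append(absolute)
        return visited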

Piskvor
Thank you for your reply. I know that it will crawl those pages sooner or later, but it would be good to have a major portion of pages crawled in advance. How about this: what if I had a text list of all the registered domains and indexed most of them? Can you suggest a link where I can get an updated list of registered domains? I know: http://www.who.is/whois_index/index.php
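If you do find such a list, turning it into seed URLs is straightforward; a minimal sketch, assuming one domain per line and a hypothetical file name:

    # Hedged sketch: turn a plain-text list of registered domains
    # (one per line, e.g. "example.com") into crawlable seed URLs.
    def seeds_from_domain_file(path="registered_domains.txt"):  # placeholder name
        with open(path) as f:
            return ["http://" + line.strip() + "/" for line in f if line.strip()]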
Kuri
Whoa...that's a big list. Well, I'd say that's a pretty good starting point. (I don't know how up-to-date this is)
Piskvor
Yeah, that's a very big list. Anyway, thanks for your time.
Kuri
+2  A: 
  • http://www.dmoz.org is a good seed (see the sketch after this list).
  • As mentioned above, querying a search engine for domain keywords is a good way to orient a crawl.
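
A minimal way to turn a directory page like dmoz into a seed list is to harvest its outbound links, e.g. with the standard-library HTML parser. A sketch only, with error handling and crawl etiquette omitted; the URL is just the directory mentioned above:

    # Collect every absolute outbound link from a directory page and use
    # them as crawl seeds.
    import urllib.request
    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value and value.startswith("http"):
                        self.links.append(value)

    def seeds_from_directory(url="http://www.dmoz.org"):
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        collector = LinkCollector()
        collector.feed(html)
        return sorted(set(collector.links))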
Scharron