I followed the installation example from the Nutch wiki and was able to crawl multiple pages pulled from DMOZ easily. But is there a configuration option that makes Nutch crawl the external links it finds on a page, or write those external links to a file so they can be crawled in the next round?
What is the best way to make Nutch follow the links on a page and index those pages as well? If I were executing bin/nutch from Python, could I get back all the external links it found and build a new crawl list to run again? What would you do?
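To clarify what I mean by "via Python", here is a rough sketch of the loop I'm imagining. It assumes the crawl db can be dumped to text with `bin/nutch readdb` (the seed-host set and file layout here are my own assumptions, not something from the wiki example):

```python
import subprocess
from urllib.parse import urlparse

# Hosts from my original seed list (assumption for illustration).
SEED_HOSTS = {"example.com"}

def external_links(urls, seed_hosts):
    """Keep only URLs whose host is not one of the seed hosts."""
    result = []
    for u in urls:
        host = urlparse(u).netloc
        if host and host not in seed_hosts:
            result.append(u)
    return result

def dump_crawldb(crawldb="crawl/crawldb", out_dir="crawldb_dump"):
    """Dump the crawl db to text so the URLs can be parsed out.

    Whether this dump format is convenient to parse is exactly
    what I'm unsure about.
    """
    subprocess.run(["bin/nutch", "readdb", crawldb, "-dump", out_dir],
                   check=True)

def write_seed_list(urls, path="urls/seed.txt"):
    """Write the collected external links as a new seed list."""
    with open(path, "w") as f:
        f.write("\n".join(urls))
```

The idea would be: crawl, dump, filter out the external links, write them as a new seed list, and run the crawl again. Is there a built-in way to do this instead?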