Is it possible to find all the pages and links on ANY given website? I'd like to enter a URL and produce a directory tree of all the links from that site.

I've looked at HTTrack, but it downloads the whole site and I simply need the directory tree.

Thanks

Jonathan

+1  A: 

Check out linkchecker—it will crawl the site (while obeying robots.txt) and generate a report. From there, you can script up a solution for creating the directory tree.
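For example, a run along these lines should dump every URL it finds into a file you can post-process (the -o csv output option is one of linkchecker's documented output formats, but check linkchecker --help against your version; the URL is just a placeholder):

    linkchecker -o csv http://www.example.com/ > links.csv

The CSV rows should include each URL together with the page it was found on, which is all you need to reconstruct a directory tree.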

Hank Gay
Thank you so much, Hank! Perfect - exactly what I needed. Very much appreciated.
Jonathan Lyon
A: 
  1. Chilkat Python Web Crawler.
  2. Writing a Web Crawler in Java Tutorial

*NOTE: Just Google "simple web crawler in language_name" for more examples. A bare-bones sketch follows below.*
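If you would rather see the shape of the thing than follow a tutorial, here is a minimal sketch in Java; the start URL, the 200-page cap, and the naive href regex are all illustrative assumptions, not production choices:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URI;
    import java.net.URL;
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.HashSet;
    import java.util.Set;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class SimpleCrawler {
        // Naive href extractor; good enough for a sketch, not for real-world HTML.
        private static final Pattern LINK =
                Pattern.compile("<a [^>]*href=['\"]?([^'\" >]+)", Pattern.CASE_INSENSITIVE);

        public static void main(String[] args) throws Exception {
            String start = args.length > 0 ? args[0] : "http://www.example.com/";
            String host = new URI(start).getHost();
            Deque<String> queue = new ArrayDeque<>();
            Set<String> seen = new HashSet<>();
            queue.add(start);
            seen.add(start);

            while (!queue.isEmpty() && seen.size() < 200) { // cap so the sketch terminates
                String page = queue.poll();
                System.out.println(page);
                StringBuilder html = new StringBuilder();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(new URL(page).openStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        html.append(line).append('\n');
                    }
                } catch (Exception e) {
                    continue; // skip pages that fail to load
                }
                Matcher m = LINK.matcher(html);
                while (m.find()) {
                    try {
                        URI resolved = new URI(page).resolve(m.group(1)); // make absolute
                        String link = resolved.toString();
                        // Stay on the same host and never visit a page twice.
                        if (host != null && host.equals(resolved.getHost()) && seen.add(link)) {
                            queue.add(link);
                        }
                    } catch (Exception e) {
                        // ignore hrefs that are not valid URIs
                    }
                }
            }
        }
    }

It prints a flat list of same-host URLs as it crawls; turning that list into a directory tree is just a matter of splitting each URL's path on "/".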

TheMachineCharmer
A: 

If this is a programming question, then I would suggest writing your own regular expression to parse the retrieved content. The target tags are IMG and A for standard HTML. For Java,

final String openingTags = "(<a [^>]*href=['\"]?|<img[^> ]* src=['\"]?)";

this, along with the Pattern and Matcher classes, should detect the beginning of the tags. Add the LINK tag if you also want CSS.
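For illustration, here is one way that pattern might be wired up with Pattern and Matcher; the sample HTML and the second capture group (for the attribute value) are my own additions:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class TagScanner {
        public static void main(String[] args) {
            final String openingTags = "(<a [^>]*href=['\"]?|<img[^> ]* src=['\"]?)";
            // Append a second group to capture the URL that follows the opening tag.
            Pattern p = Pattern.compile(openingTags + "([^'\" >]+)", Pattern.CASE_INSENSITIVE);
            String html = "<p><a href=\"/about\">About</a> <img src='logo.png'></p>"; // sample input
            Matcher m = p.matcher(html);
            while (m.find()) {
                System.out.println(m.group(2)); // prints /about, then logo.png
            }
        }
    }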

However, it is not as easy as you may have initially thought. Many web pages are not well-formed, and programmatically extracting all the links that a human being can "recognize" is genuinely difficult when you have to account for all the irregular ways links can be written.

Good luck!

mizubasho