What you (probably) want to use is called "recursion".
Web pages form a graph. There are several algorithms for traversal of graphs; the simplest to understand is depth-first.
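If the graph part is easier to see without any HTTP involved, here is a minimal sketch of depth-first traversal over a plain adjacency array (the array and the node names are made up for illustration; they are not part of the spider itself):

    // Depth-first traversal of a graph stored as an adjacency array.
    function dfs($node, $graph, &$visited) {
        $visited[] = $node;

        foreach ($graph[$node] as $neighbor) {
            if (!in_array($neighbor, $visited)) {
                dfs($neighbor, $graph, $visited);
            }
        }

        return $visited;
    }

    $graph = array(
        '/'          => array('/', '/post/1/', '/about/', '/archives/'),
        '/post/1/'   => array('/', '/about/', '/archives/'),
        '/about/'    => array('/', '/archives/'),
        '/archives/' => array('/', '/about/', '/post/1/'),
    );

    $visited = array();
    print_r(dfs('/', $graph, $visited)); // each page appears exactly once

The spider below does the same thing, except the "neighbors" come from fetching a page and extracting its links.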
Say your site is laid out like this (recursion terminated):
* http://example.com/
    * http://example.com/
        * ...
    * http://example.com/post/1/
        * http://example.com/
            * ...
        * http://example.com/about/
            * ...
        * http://example.com/archives/
            * ...
    * http://example.com/post/2/
        * http://example.com/
            * ...
        * http://example.com/about/
            * ...
        * http://example.com/archives/
            * ...
    * http://example.com/post/3/
        * http://example.com/
            * ...
        * http://example.com/about/
            * ...
        * http://example.com/archives/
            * ...
    * http://example.com/about/
        * http://example.com/
            * ...
        * http://example.com/archives/
    * http://example.com/archives/
        * http://example.com/
            * ...
        * http://example.com/about/
            * ...
        * http://example.com/post/1/
            * http://example.com/
                * ...
            * http://example.com/about/
                * ...
            * http://example.com/archives/
                * ...
        * http://example.com/post/2/
            * http://example.com/
                * ...
            * http://example.com/about/
                * ...
            * http://example.com/archives/
                * ...
        * http://example.com/post/3/
            * http://example.com/
                * ...
            * http://example.com/about/
                * ...
            * http://example.com/archives/
                * ...
        * http://example.com/post/4/
            * http://example.com/
                * ...
            * http://example.com/about/
                * ...
            * http://example.com/archives/
                * ...
        * http://example.com/post/5/
            * http://example.com/
                * ...
            * http://example.com/about/
                * ...
            * http://example.com/archives/
                * ...
When you first hit http://example.com/, you have the following links:

* http://example.com/
* http://example.com/post/1/
* http://example.com/post/2/
* http://example.com/post/3/
* http://example.com/about/
* http://example.com/archives/
You need to keep track of pages you have already visited, so you can ignore them. (Otherwise, it'd take forever to spider a page ... literally.) You add to the ignore list every time you visit a page. Right now, the only entry in the ignore list is http://example.com/.
Next, you filter out the ignored links, reducing the list to:

* http://example.com/post/1/
* http://example.com/post/2/
* http://example.com/post/3/
* http://example.com/about/
* http://example.com/archives/
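As a minimal, self-contained sketch of that bookkeeping, using the links from the layout above:

    // The links found on http://example.com/ (taken from the layout above).
    $links = array(
        'http://example.com/',
        'http://example.com/post/1/',
        'http://example.com/post/2/',
        'http://example.com/post/3/',
        'http://example.com/about/',
        'http://example.com/archives/',
    );

    // The ignore list after visiting the front page.
    $ignoredUrls = array('http://example.com/');

    // Filter out anything already visited.
    $newLinks = array_diff($links, $ignoredUrls);
    print_r($newLinks); // the five remaining links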
You then run the fetcher again on each of these links, by calling your function again with the current URL and the ignore list: spider($url, &$ignoredUrls). (The & means $ignoredUrls is passed by reference, so newly ignored items are visible to the parent spider calls; in the code below, the & sits on the parameter in the function definition.)
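A toy sketch, separate from the spider itself, of what the reference buys you:

    // Without the &, this function would receive a copy of the array
    // and the caller's ignore list would never grow.
    function mark_visited($url, array &$ignoredUrls) {
        $ignoredUrls[] = $url;
    }

    $ignoredUrls = array();
    mark_visited('http://example.com/about/', $ignoredUrls);
    print_r($ignoredUrls); // now contains http://example.com/about/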
Looking at http://example.com/post/1/, we see the following links:

* http://example.com/
* http://example.com/about/
* http://example.com/archives/
We have already looked at http://example.com/. The next link which isn't ignored is the about page. From the about page, we go to the archives page, where we look through each post. Each post has the same set of links:

* http://example.com/
* http://example.com/about/
* http://example.com/archives/
Because we already visited all of those links, we return an empty array.
Back at /archives/, we append the /post/2/ link (the first non-ignored link in /archives/) to a $foundLinks local variable, as well as the return value of the call to spider on /post/2/ (which is an empty array). Then we move on to the next post.

When we have gone through all our posts, we return $foundLinks. The /about/ page then adds those links to its own $foundLinks, in addition to the /about/ link. Flow goes back to /post/1/, which looks at /archives/ (which is now ignored). The /post/1/ spider is now complete, and returns its own $foundLinks. Eventually, the original call gets all the found links.
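To make that concrete: assuming the link fetcher returns each page's links in the order shown in the layout above, the original call would come back with every URL exactly once, roughly in this order:

    http://example.com/
    http://example.com/post/1/
    http://example.com/about/
    http://example.com/archives/
    http://example.com/post/2/
    http://example.com/post/3/
    http://example.com/post/4/
    http://example.com/post/5/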
This method works fine for a small site which is completely closed. If you link to Wikipedia, though, you'll be spidering all day long. You can combat this problem in at least two ways:
- Terminating spidering after a certain depth (e.g. 10 links deep).
- Restricting the URLs, e.g. to a certain domain or subdomain (like example.com), as sketched just after this list.
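A minimal sketch of the second option, using parse_url() to keep the spider on a single host (is_same_host() is a hypothetical helper, not part of the implementation below):

    // Hypothetical helper: returns true if $url lives on the host we
    // want to stay on, so the spider never wanders off-site.
    function is_same_host($url, $allowedHost) {
        $host = parse_url($url, PHP_URL_HOST);

        return is_string($host) && strcasecmp($host, $allowedHost) === 0;
    }

    var_dump(is_same_host('http://example.com/post/1/', 'example.com'));       // true
    var_dump(is_same_host('http://en.wikipedia.org/wiki/PHP', 'example.com')); // false

You would call it inside the foreach loop and continue past any link that fails the check, just like the ignore list.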
Here's a quick implementation of spider (untested):
    function get_links($url) {
        // curl/DOM code here
    }

    define('SPIDER_MAX_DEPTH', 10);

    function spider_internal($url, &$ignoredUrls, $depth = 0) {
        $foundUrls = array($url);

        // Mark this page as visited so it is never fetched again.
        $ignoredUrls[] = $url;

        if ($depth >= SPIDER_MAX_DEPTH) {
            return $foundUrls;
        }

        $links = get_links($url);

        foreach ($links as $link) {
            // Skip anything already visited.
            if (array_search($link, $ignoredUrls) !== false) {
                continue;
            }

            // Recurse, collecting whatever the child spider finds.
            $foundUrls = array_merge($foundUrls, spider_internal($link, $ignoredUrls, $depth + 1));
        }

        return $foundUrls;
    }

    function spider($url) {
        $ignoredUrls = array();

        return spider_internal($url, $ignoredUrls);
    }
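The get_links() function is left as a stub above. One way it might be filled in (replacing the empty stub), sketched with cURL and DOMDocument; relative-URL resolution and error handling are left out, so treat this as a starting point rather than a drop-in fetcher:

    function get_links($url) {
        // Fetch the page body with cURL.
        $curl = curl_init($url);
        curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
        $html = curl_exec($curl);
        curl_close($curl);

        if ($html === false) {
            return array();
        }

        // Parse the HTML and collect href attributes from <a> tags.
        $doc = new DOMDocument();
        @$doc->loadHTML($html); // suppress warnings about sloppy markup

        $links = array();
        foreach ($doc->getElementsByTagName('a') as $anchor) {
            $href = $anchor->getAttribute('href');

            // Keep only absolute http(s) URLs; resolving relative URLs
            // against $url is left as an exercise.
            if (preg_match('#^https?://#i', $href)) {
                $links[] = $href;
            }
        }

        return array_values(array_unique($links));
    }

    // Usage:
    $allUrls = spider('http://example.com/');
    print_r($allUrls);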