I am developing an application that is, to put it simply, a niche-based search engine. Within the application I have included a crawl() function which crawls a website and then uses the collectData() function to store the relevant data from each page in the "products" table. The visited pages are stored in a database.

The crawler works pretty well, just as described, except for two things: timeouts and memory. I've managed to correct the timeout error, but the memory problem remains. I know that simply increasing memory_limit doesn't actually fix the problem.

The function is run by visiting "EXAMPLE.COM/products/crawl".

Is a memory leak inevitable with a PHP web crawler, or is there something I'm doing wrong or not doing?

Thanks in advance. (Code below.)

    function crawl() {
        $this->_crawl('http://www.example.com/', 'http://www.example.com');
    }

    /**
     * This function finds all links in $start, collects
     * data from them, and recursively crawls them.
     *
     * @param $start  the web page where the crawler starts
     * @param $domain the domain in which to stay
     */

    function _crawl($start, $domain) {
        $dom = new DOMDocument();
        @$dom->loadHTMLFile($start);

        $xpath = new DOMXPath($dom);
        $hrefs = $xpath->evaluate("/html/body//a");//get all <a> elements

        for ($i = 0; $i < $hrefs->length; $i++) {

            $href = $hrefs->item($i);
            $url = $href->getAttribute('href'); // get href value
            if (strpos($url, 'http') === false) { // relative link, prepend the domain
                $url = $domain . '/' . $url;
            }

            if ($this->Page->find('count', array('conditions' => array('Page.url' => $url))) < 1 && (strpos($url, $domain) !== false)) { // if this link is within the domain and not already in the database

                $this->Page->create();
                $this->Page->set('url',$url);
                $this->Page->set('indexed',date('Y-m-d H:i:s'));
                $this->Page->save(); // add this url to database

                $this->_collectData($url); //collect this links data
                $this->_crawl($url, $domain); //crawl this link
            }
        }
    }
+1  A: 

You're creating upwards of twice as many database queries as there are links on the page; I'd say that's where your problem is. Try to just accumulate the links into an array, do one big batch query to filter out the duplicates, and insert the new records with a single saveAll().
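
A minimal sketch of that idea, assuming the CakePHP 1.x model API used in the question; the function name, the find('list')/saveAll() usage and the field names are illustrative and would need adjusting to the actual schema:

function _crawlBatched($start, $domain) {
    $dom = new DOMDocument();
    @$dom->loadHTMLFile($start);

    $xpath = new DOMXPath($dom);
    $hrefs = $xpath->evaluate("/html/body//a");

    // 1. Collect every candidate URL first -- no queries inside the loop.
    $urls = array();
    for ($i = 0; $i < $hrefs->length; $i++) {
        $url = $hrefs->item($i)->getAttribute('href');
        if (strpos($url, 'http') === false) { // relative link
            $url = $domain . '/' . $url;
        }
        if (strpos($url, $domain) !== false) {
            $urls[$url] = true; // also de-duplicates within the page
        }
    }
    $urls = array_keys($urls);
    if (empty($urls)) {
        return;
    }

    // 2. One query to find which of those URLs are already known.
    $known = $this->Page->find('list', array(
        'fields'     => array('Page.id', 'Page.url'),
        'conditions' => array('Page.url' => $urls),
    ));

    // 3. One saveAll() for everything that is new.
    $new = array();
    foreach (array_diff($urls, $known) as $url) {
        $new[] = array('url' => $url, 'indexed' => date('Y-m-d H:i:s'));
    }
    if (!empty($new)) {
        $this->Page->saveAll($new);
    }

    // Data collection (and any further crawling) can then work off $new
    // instead of issuing per-link queries.
}

That brings the query count down to two per page, regardless of how many links the page contains.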


Actually, looking at it again, you're recursively crawling all links as well, but without any depth limit or abort condition. In other words, the script will continue as long as there are links to follow, which is potentially infinite. You should just process one page at a time and crawl further links in another instance, for example using a queue/worker pattern.
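
For example (sketch only: it assumes a boolean "crawled" column added to the pages table, and _queueLinks() is a hypothetical helper that inserts newly discovered URLs as uncrawled rows), the pages table itself can serve as the queue, with each invocation processing exactly one page:

function crawlNext() {
    // Pull a single page that has not been processed yet.
    $page = $this->Page->find('first', array(
        'conditions' => array('Page.crawled' => 0),
    ));
    if (empty($page)) {
        return; // queue is empty, nothing to do
    }

    $url = $page['Page']['url'];

    // Process exactly one page: collect its data and enqueue the links it
    // contains (insert them as uncrawled Page rows) -- but do NOT recurse.
    $this->_collectData($url);
    $this->_queueLinks($url, 'http://www.example.com'); // hypothetical helper

    // Mark this page as done so the next run picks up a different one.
    $this->Page->id = $page['Page']['id'];
    $this->Page->saveField('crawled', 1);
}

Each run touches one page and then exits, so memory is released between pages; something external (cron, a daemon, or repeated requests) drives the loop.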

deceze
Thanks for your feedback. Any tips on implementing this? The concept is relatively straightforward, but I'm unsure about creating a separate instance. Would I, for example, have to call EXAMPLE.COM/products/crawl from within the script to run a separate instance?
KThompson
No, you'd rather work with cron jobs or a daemon. There are many threads here on SO to get you started: http://stackoverflow.com/search?q=php+queue+worker
deceze
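
As an illustration of the cron approach (the paths and URL are examples only, assuming the single-page worker action sketched above is exposed at /products/crawlNext):

# crontab entry: run the worker once a minute; each request processes a
# single queued page and exits, so no PHP process grows without bound.
* * * * * /usr/bin/wget -q -O /dev/null http://www.example.com/products/crawlNext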