Does anyone have a PHP function that can grab all links inside a specific DIV on a remote site? Usage might be:

$links = grab_links($url,$divname);

And it would return an array I can use. I can figure out how to grab links in general, but I'm not sure how to limit it to a specific div.

Thanks! Scott

+1  A: 

Check out PHP XPath. It will let you query a document for the contents of specific tags and so on. The example on the php site is pretty straightforward: http://php.net/manual/en/simplexmlelement.xpath.php

This following example will actually grab all of the URLs in any DIVs in a doc:

$xml = new SimpleXMLElement($docAsString);

$result = $xml->xpath('//div//a');

Note that SimpleXMLElement requires well-formed markup, so this works on XHTML-style documents as well as XML, but it will choke on typical real-world HTML.

Good XPath reference: http://msdn.microsoft.com/en-us/library/ms256086.aspx
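To answer the original question, the XPath can be narrowed to one div by its id. A sketch (the sample document and the id `news` are made up here, and hand-written to be well-formed since SimpleXMLElement won't accept sloppy HTML):

```php
<?php
// Well-formed sample document; SimpleXMLElement rejects invalid markup.
$docAsString = <<<XML
<html><body>
  <div id="menu"><a href="/home">Home</a></div>
  <div id="news"><a href="/story1">One</a><a href="/story2">Two</a></div>
</body></html>
XML;

$xml = new SimpleXMLElement($docAsString);

// Only anchors inside the div with id="news"
$result = $xml->xpath("//div[@id='news']//a");

$links = array();
foreach ($result as $a) {
    $links[] = (string) $a['href'];
}
print_r($links);
```

This prints just `/story1` and `/story2`, skipping the link in the menu div.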

mlaw
+1  A: 

Hi,

In the past I have used the PHP Simple HTML DOM library with success:

http://simplehtmldom.sourceforge.net/

Samples:

// Create DOM from URL or file
$html = file_get_html('http://www.google.com/');

// Find all images 
foreach($html->find('img') as $element) 
       echo $element->src . '<br>';

// Find all links 
foreach($html->find('a') as $element) 
       echo $element->href . '<br>';
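Simple HTML DOM's find() also accepts CSS-style selectors, so restricting the search to one div is a one-liner. A sketch only (it assumes simple_html_dom.php is on the include path, and the URL and div id `content` are placeholders):

```php
// Sketch -- requires simple_html_dom.php from the library above
include 'simple_html_dom.php';

$html = file_get_html('http://www.example.com/');

// Find links only inside the div with id="content"
foreach ($html->find('div#content a') as $element) {
    echo $element->href . '<br>';
}
```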
redhatlab
+1  A: 

I found something that seems to do what I wanted.

http://www.earthinfo.org/xpaths-with-php-by-example/

<?php

$html = new DOMDocument();
@$html->loadHtmlFile('http://www.bbc.com');
$xpath = new DOMXPath( $html );
$nodelist = $xpath->query( "//div[@id='news_moreTopStories']//a/@href" );
foreach ($nodelist as $n) {
    echo $n->nodeValue . "\n";
}

// for images

echo "<br><br>";
$html = new DOMDocument();
@$html->loadHtmlFile('http://www.bbc.com');
$xpath = new DOMXPath( $html );
$nodelist = $xpath->query( "//div[@id='promo_area']//img/@src" );
foreach ($nodelist as $n) {
    echo $n->nodeValue . "\n";
}

?>

I also tried the PHP DOM method and it seems faster...

http://w-shadow.com/blog/2009/10/20/how-to-extract-html-tags-and-their-attributes-with-php/

$html = file_get_contents('http://www.bbc.com');
//Create a new DOM document
$dom = new DOMDocument;

//Parse the HTML. The @ is used to suppress any parsing errors
//that will be thrown if the $html string isn't valid XHTML.
@$dom->loadHTML($html);

//Get all links. You could also use any other tag name here,
//like 'img' or 'table', to extract other tags.
$links = $dom->getElementById('news_moreTopStories')->getElementsByTagName('a');

//Iterate over the extracted links and display their URLs
foreach ($links as $link){
    //Extract and show the "href" attribute. 
    echo $link->getAttribute('href'), '<br>';
}
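Wrapping this up into the grab_links() helper the question asks for might look like the sketch below (the function names are my own; the parsing step is split out so it can be reused on any HTML string, and error handling is minimal):

```php
<?php
// Extract the href of every link inside the div with the given id.
function grab_links_from_html($html, $divname) {
    $links = array();
    $dom = new DOMDocument();
    @$dom->loadHTML($html); // @ silences warnings about sloppy markup
    $xpath = new DOMXPath($dom);
    $nodes = $xpath->query("//div[@id='$divname']//a/@href");
    foreach ($nodes as $node) {
        $links[] = $node->nodeValue;
    }
    return $links;
}

// Fetch a remote page and grab its links, per the question's signature.
function grab_links($url, $divname) {
    $html = @file_get_contents($url);
    return ($html === false) ? array() : grab_links_from_html($html, $divname);
}

// Usage on an inline sample:
$sample = '<div id="news"><a href="/a">A</a></div><div id="x"><a href="/b">B</a></div>';
print_r(grab_links_from_html($sample, 'news'));
```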
Scott
I did notice it's a bit slower than using PHP DOM.
Scott
True, XPath is a bit slow. Parsing purely with regular expressions would probably be one of the fastest things you could do.
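For example, a naive regex pass might look like this (a sketch; fast, but fragile on unquoted attributes, comments, and markup inside scripts, which is why the DOM approaches above are usually safer):

```php
<?php
// Naive regex extraction of href values -- quick, but easily fooled
// by HTML that a real parser would handle correctly.
$html = '<div id="nav"><a href="/one">1</a> <a href="/two">2</a></div>';
preg_match_all('/<a\s[^>]*href=["\']([^"\']+)["\']/i', $html, $matches);
print_r($matches[1]);
```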
mlaw