views: 81
answers: 5

Hello!

I was wondering how I could download a webpage in PHP for parsing?

+5  A: 

With the cURL library.
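
A minimal sketch of that approach (example.com is just a placeholder; adjust the options to taste):

$ch = curl_init('http://www.example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body as a string instead of printing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow HTTP redirects
$html = curl_exec($ch);
if ($html === false) {
    echo 'cURL error: ' . curl_error($ch);
}
curl_close($ch);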

David Dorward
+5  A: 

You can use something like this:

$homepage = file_get_contents('http://www.example.com/');
echo $homepage;
poscaman
I'm trying to make a search website, just as a test. I'm trying to search another website using their search URL. You type something into a search box, the query is passed to Google, for instance, and then the results from Google are displayed (see the sketch after these comments).
Hugo
this one works great if you don't have curl installed
andufo
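For the search-site case Hugo describes, a rough sketch along these lines could work (the q parameter is Google's real query parameter, but note that Google tends to block or rate-limit automated queries, so treat this as illustrative only):

$query = isset($_GET['q']) ? $_GET['q'] : '';
// urlencode() keeps user input safe inside the query string
$url = 'http://www.google.com/search?q=' . urlencode($query);
$results = file_get_contents($url);
if ($results !== false) {
    echo $results; // in a real app, parse and filter this HTML before displaying it
}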
+4  A: 

Since you will likely want to parse the page with DOM, you can load the page directly with:

libxml_use_internal_errors(true); // real-world HTML is rarely valid; keep parser warnings quiet
$dom = new DOMDocument;
$dom->loadHTMLFile('http://www.example.com'); // load() expects well-formed XML; loadHTMLFile() handles HTML

provided your PHP has allow_url_fopen enabled.

But basically, any function that supports HTTP stream wrappers can be used to download a page.
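
For example, plain fopen() works through the http:// stream wrapper (a sketch, again assuming allow_url_fopen is on):

$handle = fopen('http://www.example.com/', 'r');
if ($handle) {
    while (!feof($handle)) {
        echo fread($handle, 8192); // stream the page in 8 KB chunks
    }
    fclose($handle);
}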

Gordon
A: 

You can use this code:

$url = 'your url';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$data = curl_exec($ch);
curl_close($ch);
// you can do something with $data, like explode() or a preg_match() regex, to get the exact information you need
//$data = strip_tags($data);
echo $data;
Deepali
A: 

Just to add another option, because it is there even though it's not the best: use file(). I don't see that anyone has listed it here.

$array = file("http://www.stackoverflow.com"); // each array element is one line of the page (newline included)

It's nice if you want the page as an array of lines, whereas the already-mentioned file_get_contents() puts it in a single string.

Just another thing you can do.

Then you can loop through each line, if that matches your goal:

foreach ($array as $line) {
    echo $line;
    // do other stuff here
}

This comes in handy sometimes when certain APIs spit out plain text or HTML with a new entry on each line.

RetroNoodle