How to get all the content (HTML code) of any web page not on my server, using PHP?
+5
A:
Two simple methods to print out the content (HTML) of the google.com home-page:
1) Using file_get_contents()
<?php
$content = file_get_contents("http://www.google.com/");
echo '<pre>'.htmlspecialchars($content).'</pre>';
?>
If this method fails (usually because URL fopen wrappers are not enabled), use the second method below.
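Whether method 1 is even available can be checked at runtime before attempting a fetch; a minimal sketch of such a check (the wording of the messages is my own):

```php
<?php
// file_get_contents() can only open HTTP URLs when allow_url_fopen is On.
$enabled = filter_var(ini_get('allow_url_fopen'), FILTER_VALIDATE_BOOLEAN);
if ($enabled) {
    echo "allow_url_fopen is On: method 1 should work.\n";
} else {
    echo "allow_url_fopen is Off: use the cURL method below.\n";
}
?>
```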
2) Using cURL:
<?php
function file_get_contents_curl($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_HEADER, 0);         // don't include response headers in the output
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the body as a string instead of printing it
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); // follow HTTP redirects
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
$content = file_get_contents_curl("http://www.google.com/");
echo '<pre>'.htmlspecialchars($content).'</pre>';
?>
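If curl_exec() returns false, curl_error() reports why the transfer failed, which makes debugging much easier than a silent empty result. A sketch of the same helper with error reporting added (the timeout value is an assumption, not part of the original answer):

```php
<?php
// Variant of the cURL helper above with basic error reporting.
function file_get_contents_curl($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the body as a string
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); // follow HTTP redirects
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);       // example timeout, in seconds (an assumption)
    $data = curl_exec($ch);
    if ($data === false) {
        // curl_error() explains why the transfer failed (DNS, timeout, SSL, ...)
        echo 'cURL error: ' . curl_error($ch) . "\n";
    }
    curl_close($ch);
    return $data;
}
?>
```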
shamittomar
2010-08-27 11:55:51
1 and 2 did not work :(
faressoft
2010-08-27 12:20:06
@faressoft, I have fixed a typo in #2, please try it now. #1 not working is a very common issue if you're on a shared server. Anyway, what error do you get?
shamittomar
2010-08-27 12:26:04
Fatal error: Call to undefined function curl_init() in C:\xampp\htdocs\fahwa\GetPage.php on line 4
faressoft
2010-08-27 14:52:37
OK. Then you need to enable `cURL` on your machine. See this link: http://www.tildemark.com/programming/php/enable-curl-with-xampp-on-windows-xp.html
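On XAMPP for Windows, enabling cURL typically means uncommenting the extension line in php.ini and restarting Apache (the path below assumes a default XAMPP install):

```ini
; In C:\xampp\php\php.ini — remove the leading semicolon, then restart Apache
extension=php_curl.dll
```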
shamittomar
2010-08-27 15:00:19
A:
It's quite a large lib (~40 KB), but PHP Simple HTML DOM Parser should do what you want. :)
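A minimal sketch of how that parser is typically used, assuming its simple_html_dom.php file is on the include path (the HTML fragment and selector here are only examples; file_get_html($url) works the same way for remote pages):

```php
<?php
require_once 'simple_html_dom.php'; // PHP Simple HTML DOM Parser

// Parse a fragment from a string and pull out every link's href.
$html = str_get_html('<div><a href="http://www.google.com/">Google</a></div>');
foreach ($html->find('a') as $link) {
    echo $link->href . "\n";
}
?>
```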
Stann0rz
2010-08-27 15:22:45