I know the question regarding PHP web page scrapers has been asked time and time again, and using those answers, I discovered SimpleHTMLDOM. After working seamlessly on my local server, I uploaded everything to my online server only to find out something wasn't working right. A quick look at the FAQ led me to this. I'm currently using a free hosting service, so I can't edit any php.ini settings. Following the FAQ's suggestion, I tried using cURL, only to find out that this too is turned off by my hosting service. Are there any other simple solutions to scrape the contents of another web page without the use of cURL or SimpleHTMLDOM?
If cURL and allow_url_fopen are not enabled, you can try to fetch the content via fsockopen (open Internet or Unix domain socket connection).

In other words, you have to do HTTP requests manually. See the example in the manual for how to do a GET request. The returned content can then be further processed. If sockets are enabled, you can also use any third-party lib utilizing them, for instance Zend_Http_Client.
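A minimal sketch of such a manual GET request over fsockopen, using www.example.com and the root path as placeholders; HTTP/1.0 is used here to avoid having to de-chunk the response body:

```php
<?php
// Open a plain TCP connection to the web server on port 80.
$errno  = 0;
$errstr = '';
$fp = fsockopen('www.example.com', 80, $errno, $errstr, 30);
if (!$fp) {
    die("Connection failed: $errstr ($errno)");
}

// Write a minimal HTTP/1.0 GET request by hand
// (HTTP/1.0 avoids chunked transfer encoding in the reply).
$request  = "GET / HTTP/1.0\r\n";
$request .= "Host: www.example.com\r\n";
$request .= "Connection: Close\r\n\r\n";
fwrite($fp, $request);

// Read the raw response (status line, headers and body).
$response = '';
while (!feof($fp)) {
    $response .= fgets($fp, 1024);
}
fclose($fp);

// Split headers from body; the body is the HTML you can then scrape.
list($headers, $body) = explode("\r\n\r\n", $response, 2);
echo $body;
```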
On a side note, check out Best Methods to Parse HTML for alternatives to SimpleHTMLDOM.
cURL is a specialty API. It's not the HTTP library it's often made out to be, but a generic data transfer library for FTP, SFTP, SCP, HTTP PUT, SMTP, TELNET, etc. If you only want HTTP, there is a corresponding PEAR library for that. Or check if your PHP version has the official http extension enabled. For scraping, try phpQuery or QueryPath. Both come with built-in HTTP support.
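The answer doesn't name the PEAR library; a minimal sketch assuming it means HTTP_Request2 (installed via PEAR), with www.example.com as a placeholder. By default it uses a socket-based adapter rather than cURL, which fits the constraints described in the question:

```php
<?php
// Assumes PEAR's HTTP_Request2 package is installed and on the include path.
require_once 'HTTP/Request2.php';

$request = new HTTP_Request2('http://www.example.com/', HTTP_Request2::METHOD_GET);

try {
    $response = $request->send();
    if ($response->getStatus() === 200) {
        // The response body is the HTML you can hand to a parser.
        echo $response->getBody();
    } else {
        echo 'Unexpected HTTP status: ' . $response->getStatus();
    }
} catch (HTTP_Request2_Exception $e) {
    echo 'Request failed: ' . $e->getMessage();
}
```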
If you just want to grab the generated HTML of a web page, then use the file_get_contents() function.
file_get_contents() is the simplest method to grab a page without installing extra libraries, but note that it can only fetch URLs when allow_url_fopen is enabled in php.ini.
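A minimal sketch, assuming allow_url_fopen is on and using www.example.com as a placeholder; the stream context only sets a User-Agent and a timeout, since some sites reject requests without one:

```php
<?php
// Optional stream context: a User-Agent and a timeout for the HTTP wrapper.
$context = stream_context_create([
    'http' => [
        'user_agent' => 'Mozilla/5.0 (compatible; MyScraper/1.0)',
        'timeout'    => 10,
    ],
]);

// Fetch the raw HTML; file_get_contents() returns false on failure.
$html = file_get_contents('http://www.example.com/', false, $context);

if ($html === false) {
    die('Could not fetch the page (is allow_url_fopen enabled?)');
}

// $html now holds the page source, ready for parsing.
echo strlen($html) . " bytes fetched\n";
```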