views:

69

answers:

4

I know the question regarding PHP web page scrapers has been asked time and time again, and using those answers, I discovered SimpleHTMLDOM. After it worked seamlessly on my local server, I uploaded everything to my online server only to find out something wasn't working right. A quick look at the FAQ led me to this. I'm currently using a free hosting service, so I can't edit any php.ini settings. Following the FAQ's suggestion, I tried using cURL, only to find out that it too is turned off by my hosting service. Are there any other simple solutions to scrape the contents of another web page without using cURL or SimpleHTMLDOM?

+3  A: 

If cURL and allow_url_fopen are not enabled you can try to fetch the content via

  • fsockopen — Open Internet or Unix domain socket connection

In other words, you have to do HTTP requests manually. See the example in the manual for how to do a GET request. The returned content can then be further processed. If sockets are enabled, you can also use any third-party lib utilizing them, for instance Zend_Http_Client.
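As a rough, untested sketch (host and path are placeholders), a manual GET request over fsockopen looks roughly like the example from the PHP manual:

```php
// Open a plain TCP connection to the web server (port 80, 30 second timeout)
$fp = fsockopen('www.example.com', 80, $errno, $errstr, 30);
if (!$fp) {
    die("$errstr ($errno)");
}

// Write the raw HTTP GET request by hand
$out  = "GET /some/page.html HTTP/1.1\r\n";
$out .= "Host: www.example.com\r\n";
$out .= "Connection: Close\r\n\r\n";
fwrite($fp, $out);

// Read the full response
$response = '';
while (!feof($fp)) {
    $response .= fgets($fp, 128);
}
fclose($fp);

// Split off the headers so only the HTML body remains for scraping
list($headers, $body) = explode("\r\n\r\n", $response, 2);
```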

On a sidenote, check out Best Methods to Parse HTML for alternatives to SimpleHTMLDom.

Gordon
+1 didn't know you could use fsockopen even if allow_url_fopen is disallowed.
nikic
A: 

cURL is a specialty API. It's not the HTTP library it's often made out to be, but a generic data transfer library for FTP, SFTP, SCP, HTTP PUT, SMTP, TELNET, etc. If you only need HTTP, there is a corresponding PEAR library for that. Or check if your PHP version has the official http extension enabled. For scraping, try phpQuery or QueryPath. Both come with built-in HTTP support.
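As a minimal sketch (assuming the PEAR package meant here is HTTP_Request2 and that it is installed on the host), forcing its socket adapter avoids both cURL and allow_url_fopen:

```php
require_once 'HTTP/Request2.php';

$request = new HTTP_Request2('http://www.example.com/', HTTP_Request2::METHOD_GET);
// The socket adapter opens the connection itself, so neither the cURL
// extension nor allow_url_fopen is required
$request->setAdapter('socket');

$response = $request->send();
if ($response->getStatus() == 200) {
    $html = $response->getBody();
}
```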

mario
I think QueryPath uses DOM's loading facilities, and afaik those depend on `allow_url_fopen`. phpQuery, on the other hand, uses `Zend_Http_Client`, so that might be an option. The PEAR library is a good call too. It's an implementation on top of `fsockopen`.
Gordon
A: 

If you're just wanting to grab the generated HTML of a web page, then use the file_get_contents() function.
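For example (a trivial sketch, and it only works when `allow_url_fopen` is enabled in php.ini; the URL is a placeholder):

```php
// Requires allow_url_fopen to be enabled on the server
$html = file_get_contents('http://www.example.com/');
if ($html !== false) {
    echo $html;
}
```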

Martin Bean
The OP's host has `allow_url_fopen` disabled, so that won't work.
Gordon
A: 

file_get_contents() is the simplest method to grab a page without installing extra libraries.

ScraperWiki
That's the [same answer as Martin's above](http://stackoverflow.com/questions/3880628/how-to-scrape-websites-when-curl-and-allow-url-fopen-is-disabled/3880979#3880979). Unless your own answer adds something new, you are encouraged to upvote the original answer instead of repeating it (especially when it isn't applicable to the OP's problem, as in this case).
Gordon
file_get_contents() isn't an option.
Nate Shoffner