How can I open a web page and receive its cookies using PHP?
The motivation: I am trying to use feed43 to create an RSS feed from the non-RSS-enabled HighLearn website (a remote learning website). I found the web page that contains the feed contents I need to parse; however, it requires logging in first.
Luckily, logging in can be done via ...
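A minimal sketch of the usual cookie-jar approach, assuming a hypothetical login URL and form fields (the real ones come from HighLearn's login page): log in once with a POST, let cURL store the session cookies in a jar file, then fetch the protected page with the same jar.

<?php
// Sketch: log in, keep the session cookies in a jar file, then request
// the protected page with the same jar. URLs and field names are placeholders.
$jar = tempnam(sys_get_temp_dir(), 'cookie');

$ch = curl_init('https://example.com/login.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
    'username' => 'me',
    'password' => 'secret',
)));
curl_setopt($ch, CURLOPT_COOKIEJAR, $jar);   // write received cookies here
curl_setopt($ch, CURLOPT_COOKIEFILE, $jar);  // send them back on later requests
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);

// Same handle and same jar: now fetch the page that needs the session.
curl_setopt($ch, CURLOPT_URL, 'https://example.com/feed-source.php');
curl_setopt($ch, CURLOPT_HTTPGET, true);
$html = curl_exec($ch);
curl_close($ch);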
I'm the developer of twittertrend.net, and I was wondering if there is a faster way to get the headers of a URL, besides doing curl_multi. I process over 250 URLs a minute, and I need a really fast way to do this from a PHP standpoint. Either a bash script could be used to output the headers, or a C application, anything that could be faster?...
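For what it's worth, the usual pure-PHP speedups are to fetch headers only (a HEAD request via CURLOPT_NOBODY) and to keep the transfers parallel. A rough sketch, with placeholder URLs:

<?php
// Sketch: HEAD requests only (no body download), run in parallel.
$urls = array('http://example.com/a', 'http://example.com/b');

$mh = curl_multi_init();
$handles = array();
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true);         // HEAD: headers only
    curl_setopt($ch, CURLOPT_HEADER, true);         // include headers in the output
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}

do {
    curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh);      // wait for socket activity instead of spinning
    }
} while ($running > 0);

foreach ($handles as $url => $ch) {
    $headers = curl_multi_getcontent($ch);   // raw header block for this URL
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);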
Hi All,
I am going to have a daemon that will run on a FreeBSD server, which will exchange small amounts of data with a list of URIs every minute.
I am thinking of using the curl_multi functions to run them all at once, or in groups, every minute, using a POST. I am open to other ideas, though.
I will have to do some benchmarking late...
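A sketch of what one minute's tick could look like with curl_multi and POSTs; post_all(), the 30-second cap, and the URI => payload map are assumptions for illustration, not part of the question.

<?php
// Sketch: POST a small payload to every URI in parallel, collect responses.
function post_all(array $targets)
{
    $mh = curl_multi_init();
    $handles = array();
    foreach ($targets as $uri => $payload) {
        $ch = curl_init($uri);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($payload));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 30);   // never outlive the one-minute cycle
        curl_multi_add_handle($mh, $ch);
        $handles[$uri] = $ch;
    }

    do {
        curl_multi_exec($mh, $running);
        if ($running) {
            curl_multi_select($mh);
        }
    } while ($running > 0);

    $responses = array();
    foreach ($handles as $uri => $ch) {
        $responses[$uri] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);
    return $responses;
}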
Hello all,
I have a site that has a simple API which can be used via HTTP. I wish to make use of the API and submit data about 1000-1500 times in one go. Here is their API: http://api.jum.name/
I have constructed the URL to make a submission but now I am wondering what is the best way to make these 1000-1500 API GET requests? Here is...
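One simple option, sketched here with $submissions standing in for the pre-built GET URLs (curl_multi in batches is the other route if raw speed matters): reuse a single handle and just swap the URL per submission, so the connection to the API host can be kept alive instead of reconnecting 1500 times.

<?php
// Sketch: one reused handle, URL swapped per request, failures collected.
$submissions = array(/* ~1000-1500 pre-built API GET URLs */);

$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);

$failed = array();
foreach ($submissions as $url) {
    curl_setopt($ch, CURLOPT_URL, $url);
    if (curl_exec($ch) === false || curl_getinfo($ch, CURLINFO_HTTP_CODE) >= 400) {
        $failed[] = $url;   // keep a list to retry or report later
    }
}
curl_close($ch);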
Is there any sane way to make an HTTP request asynchronously in PHP without throwing out the response? I.e., something similar to AJAX: the PHP script initiates the request, does its own thing, and later, when the response is received, a callback function/method or another script handles the response.
One approach has crossed my mind - ...
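A sketch of one possible shape for this, using curl_multi so the script can interleave its own work with the transfer and hand the body to a callback at the end; handle_response() and the URL are placeholders.

<?php
// Sketch: start the transfer, keep working, poll now and then, call back at the end.
function handle_response($body) {
    // ... process the response here ...
}

$ch = curl_init('http://example.com/slow-endpoint');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$mh = curl_multi_init();
curl_multi_add_handle($mh, $ch);
curl_multi_exec($mh, $running);        // kicks the transfer off without blocking

while ($running > 0) {
    // ... do the script's own work here ...

    curl_multi_exec($mh, $running);    // give cURL a time slice
    if ($running) {
        curl_multi_select($mh, 0.1);   // wait briefly for socket activity
    }
}

handle_response(curl_multi_getcontent($ch));
curl_multi_remove_handle($mh, $ch);
curl_close($ch);
curl_multi_close($mh);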
If I want to do multiple things (that require cookies) with curl, for example:
login to (my own) blog
then automatically submit a blog post
do I do this in one curl instance before curl_close, or do I close the first session and then start a second one for the second task? (Sorry if it's a dumb question, but I can't quite get it. Generally all exa...
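For illustration, a sketch of the two-handle variant: it is the shared cookie file, not the curl handle, that carries the session, so either way works as long as both requests point at the same jar. The URLs and form fields below are placeholders.

<?php
// Sketch: two separate handles sharing one cookie jar.
$jar = '/tmp/blog_cookies.txt';

// First task: log in, saving the session cookie to $jar.
$ch = curl_init('https://myblog.example/login');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, array('user' => 'me', 'pass' => 'secret'));
curl_setopt($ch, CURLOPT_COOKIEJAR, $jar);      // cookies are written out on curl_close()
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
curl_close($ch);                                // jar is flushed to disk here

// Second task: submit the post with a brand new handle reading the same jar.
$ch = curl_init('https://myblog.example/submit-post');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, array('title' => 'Hello', 'content' => 'Body text'));
curl_setopt($ch, CURLOPT_COOKIEFILE, $jar);     // send the saved session cookie
curl_setopt($ch, CURLOPT_COOKIEJAR, $jar);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
curl_close($ch);

Doing both requests on a single handle with the same two cookie options set is the simpler variant; the jar is what matters.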
We've written a script that pulls data from an external server. If the server goes down, we don't want our server waiting for the data, since we process a lot of data and we don't want it bogged down. To address this, we're trying to time out our curl calls if they take more than a couple hundred milliseconds.
I found some documentatio...
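The millisecond-level options cURL exposes for this are CURLOPT_CONNECTTIMEOUT_MS and CURLOPT_TIMEOUT_MS (libcurl 7.16.2+). A small sketch, with CURLOPT_NOSIGNAL set because sub-second timeouts are commonly reported to misbehave without it when the standard name resolver is in use; the URL and exact limits are placeholders.

<?php
// Sketch: give up connecting after 200 ms and give up entirely after 400 ms.
$ch = curl_init('http://external.example/data');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_NOSIGNAL, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT_MS, 200);
curl_setopt($ch, CURLOPT_TIMEOUT_MS, 400);
$data = curl_exec($ch);

if ($data === false && curl_errno($ch) === CURLE_OPERATION_TIMEOUTED) {
    // Server too slow: skip or fall back so our own processing isn't bogged down.
}
curl_close($ch);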
I'm having a problem with curl_multi_*. I want to create a class / function that receives, let's say, 1000 URLs, and processes all those URLs 5 at a time, so when a URL finishes downloading it will allocate the now-available slot to a new URL that hasn't been processed yet.
I've seen some implementations of curl_multi, but none of them al...
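A rough sketch of such a rolling queue; the callback, window size, and timeouts are placeholders, and it assumes PHP 5.3+ for the closure.

<?php
// Sketch: keep at most $window transfers active; every time one finishes,
// top the pool back up from the remaining URLs and hand the body to $callback.
function fetch_rolling(array $urls, $callback, $window = 5)
{
    $mh = curl_multi_init();
    $queue = $urls;
    $active = 0;

    $add = function () use (&$queue, &$active, $mh) {
        $ch = curl_init(array_shift($queue));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 30);
        curl_multi_add_handle($mh, $ch);
        $active++;
    };

    // Fill the initial window.
    while ($queue && $active < $window) {
        $add();
    }

    do {
        curl_multi_exec($mh, $running);
        if ($running) {
            curl_multi_select($mh);
        }

        // Drain whatever finished, then refill the window from the queue.
        while ($done = curl_multi_info_read($mh)) {
            $ch = $done['handle'];
            call_user_func($callback,
                curl_getinfo($ch, CURLINFO_EFFECTIVE_URL),
                curl_multi_getcontent($ch));
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
            $active--;
            if ($queue) {
                $add();
            }
        }
    } while ($active > 0);

    curl_multi_close($mh);
}

// Usage: fetch_rolling($thousandUrls, function ($url, $body) { /* process */ }, 5);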
My script uses curl to upload images to the smugmug site via the smugmug API.
I loop through a folder and upload every image in there, but after 3-4 uploads curl_exec fails, which stops everything and prevents the other images from uploading.
$upload_array = array(
"method" => "smugmug.images.upload",
"SessionID" => $session_id,
"Alb...
I'm using curl_multi functions to request multiple URLs and process them as they complete. As each connection completes, all I really have is the cURL handle (and associated data) from curl_multi_info_read().
The URLs come from a job queue, and once processed I need to remove the job from the queue. I don't want to rely on the URL to ide...
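One way to carry the job ID along on the handle itself is CURLOPT_PRIVATE, read back with CURLINFO_PRIVATE once curl_multi_info_read() returns the finished handle. A sketch with a placeholder job list:

<?php
// Sketch: tag each handle with its job id and recover the id on completion.
$jobs = array(101 => 'http://example.com/a', 102 => 'http://example.com/b');

$mh = curl_multi_init();
foreach ($jobs as $job_id => $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_PRIVATE, $job_id);   // stash the job id on the handle
    curl_multi_add_handle($mh, $ch);
}

do {
    curl_multi_exec($mh, $running);
    while ($done = curl_multi_info_read($mh)) {
        $ch = $done['handle'];
        $job_id = curl_getinfo($ch, CURLINFO_PRIVATE);   // which job was this?
        // ... process curl_multi_getcontent($ch), then remove job $job_id from the queue ...
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    if ($running) {
        curl_multi_select($mh);
    }
} while ($running > 0);
curl_multi_close($mh);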
I recently looked into the possibility of making multiple requests with curl. I may not be understanding it fully, so I am just hoping to clarify some concepts.
It's definitely a good option if you are fetching content from multiple sources. That way, you can start processing the results from faster servers while still waiting for slowe...
It seems like I can't use shell_exec or proc_open on my shared server.
The message I get when I try to use it is:
Warning: shell_exec() has been disabled for security reasons in /home/georgee/public_html/admin/email.php on line 4
Are there any alternatives to these functions?
...
Currently I'm using file_get_contents() to submit GET data to an array of sites, but upon execution of the page I get this error:
Fatal error: Maximum execution time of 30 seconds exceeded
All I really want the script to do is start loading the webpage, and then leave. Each webpage may take up to 5 minutes to load fully, and I don't ne...
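A sketch of a "fire and forget" request over a raw socket: write the GET line and headers, then close without reading, so the script never waits for the five-minute page. Whether the remote page keeps running after the socket closes depends on that server's configuration; the host and path below are placeholders.

<?php
// Sketch: open the socket, send the request, close immediately.
function fire_and_forget($host, $path_and_query)
{
    $fp = @fsockopen($host, 80, $errno, $errstr, 5);   // 5 s connect timeout
    if (!$fp) {
        return false;
    }
    $request  = "GET $path_and_query HTTP/1.1\r\n";
    $request .= "Host: $host\r\n";
    $request .= "Connection: Close\r\n\r\n";
    fwrite($fp, $request);
    fclose($fp);                                       // don't wait for the response
    return true;
}

fire_and_forget('example.com', '/long-running.php?foo=bar');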
Here is my current code:
$SQL = mysql_query("SELECT url FROM urls") or die(mysql_error()); //Query the urls table
while($resultSet = mysql_fetch_array($SQL)){ //Put all the urls into one variable
    // Now for some cURL to run it.
    $ch = curl_init($resultSet['url']); //load the urls
    curl_setopt($...
My apologies, I've actually asked this question multiple times, but never quite understood the answers.
Here is my current code:
while($resultSet = mysql_fetch_array($SQL)){
    $ch = curl_init($resultSet['url'] . $fullcurl); //load the urls and send GET data
    curl_setopt($ch, CURLOPT_TIMEOUT, 2); //Only lo...
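Continuing that snippet, a sketch of the "start it and walk away" version: short connect and total timeouts, output captured so nothing is echoed, and the result deliberately ignored. The urls table and the $fullcurl GET-data variable are taken from the question as-is.

<?php
// Sketch: hit each URL, give it only a couple of seconds, then move on.
$SQL = mysql_query("SELECT url FROM urls") or die(mysql_error());
while ($resultSet = mysql_fetch_array($SQL)) {
    $ch = curl_init($resultSet['url'] . $fullcurl);   // $fullcurl = the asker's GET data
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);   // keep output off the page
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2);      // 2 s to connect
    curl_setopt($ch, CURLOPT_TIMEOUT, 2);             // 2 s total, then abandon the page
    curl_exec($ch);                                   // response deliberately ignored
    curl_close($ch);
}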
Currently, my curl_multi exec stops if one URL it connects to doesn't work, so a few questions:
1: Why does it stop? That doesn't make sense to me.
2: How can I make it continue?
EDIT: Here is my code:
$SQL = mysql_query("SELECT url FROM shells") ;
$mh = curl_multi_init();
$handles = array();
while($resultSet = mysql_...
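For reference, a sketch continuing that snippet: curl_multi itself keeps going when one URL is dead; each handle reports its own error through curl_multi_info_read(), so failures can just be logged while the rest complete. The connect timeout is an assumption.

<?php
// Sketch: log failed handles, process the rest, never abort the batch.
$SQL = mysql_query("SELECT url FROM shells");
$mh = curl_multi_init();
while ($resultSet = mysql_fetch_array($SQL)) {
    $ch = curl_init($resultSet['url']);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);   // don't hang on dead hosts
    curl_multi_add_handle($mh, $ch);
}

do {
    curl_multi_exec($mh, $running);
    while ($done = curl_multi_info_read($mh)) {
        $ch = $done['handle'];
        if ($done['result'] !== CURLE_OK) {
            // This one failed; note it and carry on with the others.
            error_log(curl_getinfo($ch, CURLINFO_EFFECTIVE_URL) . ': ' . curl_error($ch));
        } else {
            // Process curl_multi_getcontent($ch) here.
        }
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    if ($running) {
        curl_multi_select($mh);
    }
} while ($running > 0);
curl_multi_close($mh);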
My application checks a number of domains to see if they are valid (approx 100). I have the following code to check a single domain:
def self.test_url uri, limit = 10
  if limit == 0
    return get_error_messages("001")
  end
  begin
    url = URI.parse(uri)
    response = Net::HTTP.start(url.host, url.port).request_...
I have a data aggregator that relies on scraping several sites, and indexing their information in a way that is searchable to the user.
I need to be able to scrape a vast number of pages daily, and I have run into problems using simple curl requests, which are fairly slow when executed in rapid sequence for a long time (the scraper runs...
I'm using curl_multi to upload files to different servers. Each server has multiple files that need uploading, so I have a curl_multi request for each server. When I execute them, I just run all the curl_multi handles in the same loop, like so:
<?php
do {
    $continue_running = false;
    foreach($handles as ...
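One way to keep that loop from spinning flat-out, sketched under the assumption that $handles is the array of curl_multi handles built earlier: pump every multi handle, and if anything is still running, block briefly on one of the active ones with curl_multi_select().

<?php
// Sketch: drive several curl_multi handles in one loop without busy-waiting.
do {
    $continue_running = false;
    $wait_on = null;

    foreach ($handles as $mh) {
        curl_multi_exec($mh, $still_running);
        if ($still_running > 0) {
            $continue_running = true;
            $wait_on = $mh;              // remember one active handle to select() on
        }
    }

    if ($continue_running) {
        // Sleep until there is socket activity on one of the uploads
        // (or 0.2 s passes), instead of looping at full speed.
        if (curl_multi_select($wait_on, 0.2) === -1) {
            usleep(50000);
        }
    }
} while ($continue_running);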
Hi,
Here is my structure:
MYSQL: Table: toys ---> Column: id, URL. How do I get my PHP script to check all of those URLs to see if they are alive or return 404s? I'd rather not echo or display the results on the page; I need to record them in MySQL in an extra column, "checks".
Results will be in this format:
http://asdasd.adas --- up ...
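A sketch of one way to do it, using the same old mysql_* calls as the other snippets here; the 'up'/'down' values and the checks column are assumptions. Send a HEAD request to each URL, inspect the HTTP status code, and write the result back instead of echoing it.

<?php
// Sketch: HEAD each URL, classify by status code, store the result in "checks".
$result = mysql_query("SELECT id, url FROM toys") or die(mysql_error());

while ($row = mysql_fetch_assoc($result)) {
    $ch = curl_init($row['url']);
    curl_setopt($ch, CURLOPT_NOBODY, true);           // HEAD: status only, no body
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    curl_exec($ch);

    $code   = curl_getinfo($ch, CURLINFO_HTTP_CODE);  // 0 means it never connected
    $status = ($code >= 200 && $code < 400) ? 'up' : 'down';
    curl_close($ch);

    mysql_query(sprintf(
        "UPDATE toys SET checks = '%s' WHERE id = %d",
        mysql_real_escape_string($status),
        (int) $row['id']
    ));
}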