I'm running a video link aggregator, and I have a script that checks whether a video has been deleted from its host site. It works by fetching the HTML at the link and checking it against target keywords.
Currently I use file_get_contents() to fetch the HTML. The problem is that some sites redirect to another URL when the video has been removed.
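From what I understand, the plain http:// wrapper can expose redirects too: $http_response_header is populated after each call, and following redirects can be disabled through a stream context. A rough sketch of that approach (the URL is a placeholder):

// Disable redirect-following so the first response is what we inspect.
$context = stream_context_create([
    'http' => ['follow_location' => 0],
]);
$html = file_get_contents('http://www.domain.com/video/123', false, $context);

// $http_response_header is filled in by the http:// wrapper;
// a Location: header means the link redirected (likely removed).
$redirected = false;
foreach ($http_response_header as $header) {
    if (stripos($header, 'Location:') === 0) {
        $redirected = true;
        break;
    }
}

But cURL seemed more straightforward for this.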
Using cURL solves the problem... but will it use more server resources? I run the checker script every 10 minutes, and each run checks 1,000 links (there are 300,000 links in the DB).
The code I want to use is as follows:
$Curl_Session = curl_init('http://www.domain.com');
curl_setopt($Curl_Session, CURLOPT_FOLLOWLOCATION, 1); // follow any redirects
curl_setopt($Curl_Session, CURLOPT_RETURNTRANSFER, 1); // return the body instead of printing it
$output = curl_exec($Curl_Session);
curl_close($Curl_Session);
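Since the whole point is spotting redirects, I assume I could also read the final URL after the redirect chain with curl_getinfo() (called before curl_close()) and compare it to the original. A rough sketch, with a placeholder URL:

$url = 'http://www.domain.com/video/123'; // placeholder
$Curl_Session = curl_init($url);
curl_setopt($Curl_Session, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($Curl_Session, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($Curl_Session, CURLOPT_TIMEOUT, 10); // don't let a dead host stall the cron run
$output = curl_exec($Curl_Session);

// CURLINFO_EFFECTIVE_URL reports the final URL after any redirects.
$final_url = curl_getinfo($Curl_Session, CURLINFO_EFFECTIVE_URL);
curl_close($Curl_Session);

$was_redirected = ($final_url !== $url); // a redirect often means "video removed"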
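On the resource question: my understanding is that opening and closing one cURL handle per link, sequentially, should be roughly comparable to file_get_contents(). For 1,000 links per run, the bigger win would probably be running requests in parallel with curl_multi. A minimal sketch, assuming $urls is the batch pulled from the DB:

// Parallel fetch with curl_multi; $urls is assumed to be the
// batch of links pulled from the DB for this run.
$multi = curl_multi_init();
$handles = [];
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    curl_multi_add_handle($multi, $ch);
    $handles[$url] = $ch;
}

// Run all transfers until every handle has finished.
do {
    $status = curl_multi_exec($multi, $active);
    if ($active) {
        curl_multi_select($multi); // wait for activity instead of busy-looping
    }
} while ($active && $status === CURLM_OK);

foreach ($handles as $url => $ch) {
    $html = curl_multi_getcontent($ch);
    $final = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL);
    // ... keyword check / redirect check goes here ...
    curl_multi_remove_handle($multi, $ch);
    curl_close($ch);
}
curl_multi_close($multi);

In practice I'd probably add the handles in smaller batches (say 20-50 at a time) rather than all 1,000 at once, to keep memory and open connections bounded.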