tags:

views: 61

answers: 2

For those who know wget, it has an option --spider, which allows one to check whether a link is broken or not without actually downloading the webpage. I would like to do the same thing in Python. My problem is that I have a list of 100'000 links that I want to check at most once a day and at least once a week. In any case this will generate a lot of unnecessary traffic.

As far as I understand from the urllib2.urlopen() documentation, it does not download the page but only the meta-information. Is this correct? Or is there some other way to do this in a nice manner?

Best,
Troels

A: 

Not sure how to do this in Python, but generally you could check the 'Response Header' and look for a 'Status-Code' of 200. At that point you can stop reading the page and continue with your next link; that way you don't have to download the whole page, just the 'Response Header'. See the List of Status Codes.
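
If it helps, here is a rough sketch of that idea using urllib2 (Python 2, as in the question): open the URL, look only at the status code, and close the response without ever calling .read(). The link_ok() helper name is just for illustration.

    import urllib2

    def link_ok(url):
        """Return True if the URL answers with HTTP 200, without reading the body."""
        try:
            response = urllib2.urlopen(url)
            status = response.getcode()   # e.g. 200
            response.close()              # never call .read(), so the body is not consumed
            return status == 200
        except urllib2.HTTPError:         # 4xx/5xx responses are raised as exceptions
            return False
        except urllib2.URLError:          # DNS failure, refused connection, etc.
            return False

Note that this still issues a GET, so the server may transmit the body even though it is never read; a HEAD request avoids that.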

Greg
Why has this been downvoted? Please explain your reasoning. I know this does not use the HEAD request, but it accomplishes the same thing.
Greg
+7  A: 

You should use a HEAD request for this; it asks the webserver for the headers without the body. See http://stackoverflow.com/questions/107405/how-do-you-send-a-head-http-request-in-python
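
A minimal sketch of such a HEAD request with httplib (Python 2, matching the urllib2 era of the question); only the status line and headers come back, never the body. The head_status() helper name is illustrative.

    import httplib
    from urlparse import urlparse

    def head_status(url):
        """Send a HEAD request and return the HTTP status code."""
        parts = urlparse(url)
        conn = httplib.HTTPConnection(parts.netloc)
        conn.request("HEAD", parts.path or "/")
        response = conn.getresponse()
        conn.close()
        return response.status            # e.g. 200, 404, ...

    print head_status("http://stackoverflow.com/")

Keep in mind that this sketch does not follow 3xx redirects on its own.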

THC4k
Right, HEAD will get you the headers (including HTTP status) without downloading the body of the message. Some sites are (mis)configured to send 'not found'/404 pages with a status of 200, though, so it would be hard to detect those situations.
Alex JL
As far as I can tell this is what wget --spider does.
Kathy Van Stone
Thanks a lot for the solution as well as the thoughts on misconfigured sites (that is worth keeping in mind!) - that is just what I need :)
Troels