tags:

views: 151

answers: 4

How can I determine, using Python, whether anything exists at a given URL? It could be an HTML page or a PDF file; it shouldn't matter. I've tried the solution written on this page http://code.activestate.com/recipes/101276/ but it just returns a 1 whether it's a PDF file or anything else.

+4  A: 

Send a HEAD request

import httplib
import urlparse

# HTTPConnection expects a bare host, not a full URL
host, path = urlparse.urlsplit(url)[1:3]
connection = httplib.HTTPConnection(host)
connection.request('HEAD', path or '/')
response = connection.getresponse()
if response.status == 200:
    print "Resource exists"
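For reference, the same idea translates to Python 3, where `httplib` was renamed `http.client`. This is only a sketch: the function name `resource_exists` and the "2xx/3xx means it exists" rule are my own assumptions, not part of the original answer.

```python
# Sketch: HEAD-based existence check in Python 3 (http.client is the
# renamed httplib). resource_exists and the 2xx/3xx heuristic are
# assumptions for illustration.
from http.client import HTTPConnection
from urllib.parse import urlsplit

def resource_exists(url):
    """Send a HEAD request and report whether the resource seems to exist."""
    parts = urlsplit(url)
    conn = HTTPConnection(parts.netloc)   # host[:port] only, not the full URL
    try:
        conn.request("HEAD", parts.path or "/")
        status = conn.getresponse().status
    finally:
        conn.close()
    return 200 <= status < 400            # count redirects as "exists" too
```

Because HEAD returns headers only, this avoids downloading the body of a large PDF just to learn that it exists.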
Yacoby
I get an error for this answer:

Traceback (most recent call last):
  File "/home/cad/eclipse/wsp/pyCrawler/src/pyCrawler/pyCrawler.py", line 63, in <module>
    print httpExists('http://sstatic.net/so/all.css?v=5912')
  File "/home/cad/eclipse/wsp/pyCrawler/src/pyCrawler/pyCrawler.py", line 25, in httpExists
    c = httplib.HTTPConnection(url)
  File "/usr/lib64/python2.6/httplib.py", line 656, in __init__
    self._set_hostport(host, port)
  File "/usr/lib64/python2.6/httplib.py", line 668, in _set_hostport
  .....
Cad
For the record, this one does use HTTP/1.1 and so does work on Slashdot.
jleedev
+2  A: 

The httplib in that example is using HTTP/1.0 instead of 1.1, and as such Slashdot is returning a status code 301 instead of 200. I would recommend using urllib2, and also probably checking for codes 20* and 30*.

The documentation for httplib states:

It is normally not used directly — the module urllib uses it to handle URLs that use HTTP and HTTPS.

[...]

The HTTP class is retained only for backward compatibility with 1.5.2. It should not be used in new code. Refer to the online docstrings for usage.

So yes. urllib is the way to open URLs in Python — an HTTP/1.0 client won't get very far on modern web servers.

(Also, a PDF link works for me.)
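A sketch of this recommendation in today's stdlib, where `urllib2` became `urllib.request`: note that `urlopen` raises `HTTPError` for 4xx/5xx instead of returning a response, so the "20* and 30*" check effectively lives in the `except` clause. The helper name `check_url` is hypothetical.

```python
# Sketch only: urllib2 is urllib.request in Python 3; check_url is a
# hypothetical helper, not code from the answer above.
import urllib.error
import urllib.request

def check_url(url):
    """Return True if a GET of `url` yields a 2xx/3xx response."""
    try:
        with urllib.request.urlopen(url) as response:
            return 200 <= response.status < 400
    except urllib.error.HTTPError:
        return False  # 4xx or 5xx: nothing usable at this URL
```

Redirects (30*) are followed automatically by `urlopen`, which is part of why it "just works" where a raw HTTP/1.0 client does not.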

jleedev
It has nothing to do with the HTTP version. It is actually because `httplib.HTTP` skipped the Host field in the request. You can test it with:

$ telnet slashdot.org 80
HEAD / HTTP/1.0
Host: slashdot.org
Iamamac
What I really meant is that a 1.0 request isn't *required* to include the host header.
jleedev
A: 

This solution returns 1 because the server is sending a 200 OK response.

There's something wrong with your server: it should return 404 if the file doesn't exist.

myfreeweb
+5  A: 

You need to check the HTTP response code. Python example:

from urllib2 import urlopen, HTTPError

try:
    code = urlopen("http://example.com/").code
except HTTPError, e:
    # urllib2 raises HTTPError for 4xx/5xx instead of returning a response
    code = e.code

4xx and 5xx codes probably mean that you cannot get anything from this URL. 4xx status codes describe client errors (like "404 Not Found") and 5xx status codes describe server errors (like "500 Internal Server Error"):

if (code / 100 >= 4):
   print "Nothing there."

jetxee
`urlopen` sends a `GET` request, and the server will return the whole content for that URL. Personally I think using `HTTPConnection`/`HTTPSConnection` to build a `HEAD` request is better; it will save a lot of network traffic.
Iamamac
Good point. I agree that `HEAD` is often better and will save traffic, so I upvoted Yacoby's answer. However, `urllib2.urlopen` is radically easy to use and saves lines of code (no need to split the URL into a server/path pair, for instance). The cost of `GET` may be acceptable in many cases.
jetxee