I wrote a web crawler in Python 2.6 using the Bing API that searches for certain documents and then downloads them for classification later. I've been using string methods and urllib.urlretrieve() to download results whose URL ends in .pdf, .ps etc., but I run into trouble when the document is 'hidden' behind a URL like:

http://www.oecd.org/officialdocuments/displaydocument/?cote=STD/CSTAT/WPNA(2008)25&docLanguage=En

So, two questions. Is there a general way to tell whether a URL links to a pdf/doc etc. file when the URL doesn't say so explicitly (e.g. www.domain.com/file.pdf)? And is there a way to get Python to snag that file?
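(For reference, a minimal sketch of the kind of extension-based check I mean; the URL list below is a placeholder, not the real crawler code or actual Bing results:)

import urllib

# placeholder list standing in for the URLs returned by the search API
candidate_urls = ['http://example.com/report.pdf', 'http://example.com/paper.ps']

for url in candidate_urls:
    # crude check: only fetch URLs that end in a known document extension
    if url.lower().endswith(('.pdf', '.ps', '.doc')):
        filename = url.split('/')[-1]
        urllib.urlretrieve(url, filename)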

Edit: Thanks for the replies, several of which suggest downloading the file to see if it's of the correct type. Only problem is... I don't know how to do that (see question #2, above). urlretrieve(<above url>) gives only an html file with an href containing that same url.

A: 

No. It is impossible to tell what kind of resource is referenced by a URL just by looking at it. It is totally up to the server to decide what it gives you when you request a certain URL.

Space_C0wb0y
+8  A: 

There's no way to tell from the URL what it's going to give you. Even if it ends in .pdf it could still give you HTML or anything it likes.

You could do a HEAD request and look at the content-type, which, if the server isn't lying to you, will tell you if it's a PDF.

Alternatively you can download it and then work out whether what you got is a PDF.
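Something like this should work for the HEAD request (a rough sketch; it assumes url is already defined, and uses a small Request subclass because urllib2 has no built-in HEAD support):

import urllib2

class HeadRequest(urllib2.Request):
    # urllib2 always issues GET/POST, so override the method to get HEAD
    def get_method(self):
        return 'HEAD'

response = urllib2.urlopen(HeadRequest(url))           # 'url' assumed to be defined
content_type = response.info().get('Content-Type', '')
if 'application/pdf' in content_type:
    # the server claims it's a PDF; fetch the body with an ordinary GET
    pass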

Douglas Leeder
Thanks for the reply. For the above URL, the content-type comes back as text/html, even though it points, indirectly, to a .pdf. And downloading it gives only an html file with an href to the same URL... any ideas?
JonC
A: 

Check the MIME type with the info() method on the object returned by urllib.urlopen(). This might not be 100% accurate; it really depends on what the site returns as a Content-Type header. If the server is well behaved it'll return the proper MIME type.

A PDF should return application/pdf, but that may not be the case.

Otherwise you might just have to download it and try it.

Xorlev
A: 

You can't tell from the URL directly. You could try downloading only the headers of the HTTP response and looking at the Content-Type header. However, you have to trust the server on this - it could respond with a wrong Content-Type header that doesn't match the data in the body.

Femaref
+2  A: 

As has been said, there is no way to tell the content type from the URL. But if you don't mind fetching the headers for every URL you can do this:

import urllib

obj = urllib.urlopen(URL)    # opens the connection and reads the response headers
headers = obj.info()
if headers.get('Content-Type', '').find('pdf') != -1:
    # we have a pdf file, download the whole body
    pass

This way you won't have to download each URL, just its headers. It's still not exactly saving network traffic, but you won't get better than that.

Also, you should check proper MIME types instead of my crude find('pdf').
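For example, a stricter check could compare the exact MIME type after stripping any parameters (just a sketch building on the headers object above):

content_type = headers.get('Content-Type', '')
# strip any parameters, e.g. 'application/pdf; name=foo' -> 'application/pdf'
if content_type.split(';')[0].strip().lower() == 'application/pdf':
    pass  # the server says this really is a PDF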

Stan
+2  A: 

In this case, what you refer to as "a document that's not explicitly referenced in a URL" seems to be what is known as a "redirect". Basically, the server tells you that you have to get the document at another URL. Normally, Python's urllib will automatically follow these redirects, so that you end up with the right file (and, as others have already mentioned, you can check the response's mime-type header to see whether it's a PDF).

However, the server in question is doing something strange here. You request the url, and it redirects you to another url. You request the other url, and it redirects you again... to the same url! And again... And again... At some point, urllib decides that this is enough already, and will stop following the redirect, to avoid getting caught in an endless loop.

So how come you are able to get the pdf when you use your browser? Because apparently, the server will only serve the pdf if you have cookies enabled. (why? you have to ask the people responsible for the server...) If you don't have the cookie, it will just keep redirecting you forever.

(check the urllib2 and cookielib modules to get support for cookies, this tutorial might help)

At least, that is what I think is causing the problem. I haven't actually tried doing it with cookies yet. It could also be that the server does not "want" to serve the pdf because it detects that you are not using a "normal" browser (in which case you would probably need to fiddle with the User-Agent header), but that would be a strange way of doing it. So my guess is that it is using a "session cookie" somewhere, and if you haven't got one yet, it keeps on trying to redirect.
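Something along these lines might work (a rough sketch, untested against that server; url and the output filename are just placeholders):

import urllib2
import cookielib

# build an opener that stores and resends cookies, so the redirect chain can resolve
cookie_jar = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie_jar))

response = opener.open(url)                            # 'url' assumed to be defined
data = response.read()

# check what actually came back before saving it
if 'pdf' in response.info().get('Content-Type', ''):
    with open('document.pdf', 'wb') as f:              # output filename is made up
        f.write(data)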

Steven
The cookie theory is confirmed: disallowing cookies for that site in Firefox, for example, and then requesting that URL gives a "Redirect loop" error (which even suggests it may be caused by not accepting cookies).
Steven
Very informative, thanks!
JonC