views:

518

answers:

3

And if it is large...then stop the download? I don't want to download files that are larger than 12MB.

import random
import urllib2

request = urllib2.Request(ep_url)
request.add_header('User-Agent', random.choice(agents))
thefile = urllib2.urlopen(request).read()
+1  A: 

You can check the Content-Length header with a HEAD request first, but be warned: this header doesn't have to be set - see http://stackoverflow.com/questions/107405/how-do-you-send-a-head-http-request-in-python

SeriousCallersOnly
How do I check the content-length in the HEAD request? Is this considered downloading headers?
TIMEX
Doing a HEAD request is at best theoretical if you want to use urllib/urllib2. Those modules only support GET and POST requests.
Andrew Dalke
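That said, there is a well-known workaround: urllib2 picks GET or POST based on whether the request carries data, so overriding get_method on a Request subclass makes urlopen send a HEAD instead. A sketch (the fallback import is only there so the snippet also runs where urllib2 has moved to urllib.request):

```python
try:
    import urllib2                    # Python 2
except ImportError:
    import urllib.request as urllib2  # Python 3 location of the same API

class HeadRequest(urllib2.Request):
    # urlopen() asks the request object for its method; returning
    # 'HEAD' here makes the server send headers only, no body.
    def get_method(self):
        return 'HEAD'

# Usage (network call, so commented out):
# response = urllib2.urlopen(HeadRequest(ep_url))
# response.headers['Content-Length'] now holds the declared size, if any.
```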
+4  A: 

You could say:

maxlength = 12 * 1024 * 1024
thefile = urllib2.urlopen(request).read(maxlength + 1)
if len(thefile) == maxlength + 1:
    raise ThrowToysOutOfPramException()

but then of course you've still read 12MB of unwanted data. If you want to minimise the risk of this happening you can check the HTTP Content-Length header, if present (it might not be). But to do that you need to drop down to httplib instead of the more general urllib.

import httplib
import urlparse

u = urlparse.urlparse(ep_url)
cn = httplib.HTTPConnection(u.netloc)
cn.request('GET', u.path, headers={'User-Agent': ua})
r = cn.getresponse()

try:
    l = int(r.getheader('Content-Length', '0'))
except ValueError:
    l = 0
if l > maxlength:
    raise IAmCrossException()

thefile = r.read(maxlength + 1)
if len(thefile) == maxlength + 1:
    raise IAmStillCrossException()

You can check the length before asking to get the file too, if you prefer. This is basically the same as above, except using the method 'HEAD' instead of 'GET'.
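The header check in both variants boils down to the same parsing rule; factored into a standalone helper (the name too_large is mine, not from the answer above), it looks like:

```python
def too_large(content_length, maxlength):
    # content_length: the raw Content-Length header value, or None if
    # the server didn't send one. Only a header that is present, parses
    # as an integer, and exceeds maxlength triggers a rejection; anything
    # else means "size unknown", so the capped read() is still needed.
    if content_length is None:
        return False
    try:
        return int(content_length) > maxlength
    except ValueError:
        return False
```

With httplib that would be called as too_large(r.getheader('Content-Length'), maxlength), since getheader returns None when the header is absent.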

bobince
thanks a lot.
TIMEX
+5  A: 

There's no need to do what bobince did and drop down to httplib. You can do all of that with urllib2 directly:

>>> import urllib2
>>> f = urllib2.urlopen("http://dalkescientific.com")
>>> f.headers.items()
[('content-length', '7535'), ('accept-ranges', 'bytes'), ('server', 'Apache/2.2.14'),
 ('last-modified', 'Sun, 09 Mar 2008 00:27:43 GMT'), ('connection', 'close'),
 ('etag', '"19fa87-1d6f-447f627da7dc0"'), ('date', 'Wed, 28 Oct 2009 19:59:10 GMT'),
 ('content-type', 'text/html')]
>>> f.headers["Content-Length"]
'7535'
>>>

If you use httplib then you may have to implement redirect handling, proxy support, and the other nice things that urllib2 does for you.
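Combining this header check with the capped read from the earlier answer gives one guard that works with any response object exposing a dict-like headers attribute and read(n) - e.g. what urllib2.urlopen() returns. The function and exception names here are my own:

```python
class TooLargeError(Exception):
    pass

def read_limited(response, maxlength):
    # First reject early if the server declares a too-large body.
    cl = response.headers.get('Content-Length')
    if cl is not None:
        try:
            declared = int(cl)
        except ValueError:
            declared = None  # malformed header: treat size as unknown
        if declared is not None and declared > maxlength:
            raise TooLargeError('Content-Length says %s bytes' % cl)
    # The header can be absent (or lie), so cap the read as well.
    data = response.read(maxlength + 1)
    if len(data) > maxlength:
        raise TooLargeError('body exceeded %d bytes' % maxlength)
    return data
```

For a 12MB cap this would be read_limited(urllib2.urlopen(request), 12 * 1024 * 1024).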

Andrew Dalke