views: 611
answers: 2
I use the following code to stream large files from the Internet into a local file:

import urllib2

# Stream the remote file to disk, one line at a time.
fp = open(file, 'wb')
req = urllib2.urlopen(url)
for line in req:
    fp.write(line)
fp.close()

This works but it downloads quite slowly. Is there a faster way? (The files are large so I don't want to keep them in memory.)

A: 

I used to use the mechanize module and its Browser.retrieve() method. In the past it used 100% CPU and downloaded things very slowly, but a recent release fixed that bug and it now works very quickly.

Example:

import mechanize
browser = mechanize.Browser()
browser.retrieve('http://www.kernel.org/pub/linux/kernel/v2.6/testing/linux-2.6.32-rc1.tar.bz2', 'Downloads/my-new-kernel.tar.bz2')

Mechanize is based on urllib2, so urllib2 may well have a similar method... but I can't find one right now.
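(For what it's worth, the standard library function this is probably alluding to is urllib.urlretrieve() in plain urllib, not urllib2; it streams a URL straight to a file. A minimal sketch, assuming Python 2:)

import urllib

# urlretrieve(url, filename) downloads the URL to the given local path
# without holding the whole file in memory.
urllib.urlretrieve(
    'http://www.kernel.org/pub/linux/kernel/v2.6/testing/linux-2.6.32-rc1.tar.bz2',
    'Downloads/my-new-kernel.tar.bz2')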

liori
+5  A: 

No reason to work line by line (small chunks AND it requires Python to find the line ends for you!-), just read it in bigger chunks, e.g.:

import urllib2

req = urllib2.urlopen(url)
CHUNK = 16 * 1024
with open(file, 'wb') as fp:
    while True:
        # Read up to CHUNK bytes at a time; read() returns '' at EOF.
        chunk = req.read(CHUNK)
        if not chunk:
            break
        fp.write(chunk)

Experiment a bit with various CHUNK sizes to find the "sweet spot" for your requirements.
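(A rough sketch of how one might compare chunk sizes, not part of the answer; the helper name, test URL, and /dev/null output path are placeholders, assuming Python 2:)

import time
import urllib2

def time_download(url, chunk_size, out_path='/dev/null'):
    # Download url in chunk_size-byte reads and return the elapsed time.
    req = urllib2.urlopen(url)
    start = time.time()
    with open(out_path, 'wb') as fp:
        while True:
            chunk = req.read(chunk_size)
            if not chunk:
                break
            fp.write(chunk)
    return time.time() - start

# Try a few candidate sizes against the same URL (hypothetical example URL).
for size in (4 * 1024, 16 * 1024, 64 * 1024, 256 * 1024):
    print size, time_download('http://example.com/big-file.tar.bz2', size)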

Alex Martelli
Thanks Alex - looks like that was my problem, because most of the lines were only a few hundred bytes.
Plumo