views: 4370

answers: 9

I have a task to download GBs of data from a website. The data is in the form of .gz files, each file being 45 MB in size.

The easy way to get the files is to use "wget -r -np -A files url". This downloads the data recursively and mirrors the website. The download rate is very high: 4 Mb/s.

But, just to play around, I was also using Python to build my own URL parser.

Downloading via Python's urlretrieve is painfully slow, possibly 4 times slower than wget. The download rate is 500 KB/s. I use HTMLParser for parsing the href tags.

I am not sure why this is happening. Are there any settings I can tweak?
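For context, a minimal sketch of the kind of parser described above (Python 3 naming here; the actual script used the Python 2 urllib, and the class and sample HTML are hypothetical):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

parser = LinkCollector()
parser.feed('<a href="data1.gz">1</a> <a href="data2.gz">2</a>')
print(parser.links)  # -> ['data1.gz', 'data2.gz']
```

Each collected link would then be passed to urlretrieve (urllib.request.urlretrieve in Python 3) to fetch the file.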

Thanks

+1  A: 

Maybe you can wget and then inspect the data in Python?

Aiden Bell
Sorry, I can't understand what you meant... are you saying to call wget from Python code?
You could do, or from the shell, taking advantage of the fast download speed ... then process data using Python.
Aiden Bell
A: 

There shouldn't be a difference really. All urlretrieve does is make a simple HTTP GET request. Have you taken out your data processing code and done a straight throughput comparison of wget vs. pure python?

Corey Goldberg
A: 

Please show us some code. I'm pretty sure the problem is with your code and not with urlretrieve.

I've worked with it in the past and never had any speed related issues.

miya
Here is the code: http://cs.jhu.edu/~kapild/MyHtmlParser1.py (the urlretrieve call is on the second-to-last line)
+1  A: 
import subprocess

myurl = 'http://some_server/data/'
subprocess.call(["wget", "-r", "-np", "-A", "files", myurl])
nosklo
subprocess is your friend :-)
DoxaLogos
+10  A: 

Probably a unit math error on your part.

Just noticing that 500 KB/s (kilobytes) is equal to 4 Mb/s (megabits) — the two tools may well be reporting the same speed in different units.
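A quick sanity check of the arithmetic (1 byte = 8 bits, using decimal prefixes):

```python
kilobytes_per_second = 500                            # urlretrieve's reported rate
bits_per_second = kilobytes_per_second * 1000 * 8     # 1 KB = 1000 bytes, 1 byte = 8 bits
megabits_per_second = bits_per_second / 1_000_000     # 1 Mb = 1,000,000 bits
print(megabits_per_second)  # -> 4.0, i.e. wget's reported rate
```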

Triptych
+1 very good point
nosklo
+3  A: 

As for the HTML parsing, the fastest/easiest you will probably get is using lxml. As for the HTTP requests themselves: httplib2 is very easy to use, and could possibly speed up downloads because it supports HTTP 1.1 keep-alive connections and gzip compression. There is also pycURL, which claims to be very fast (but is more difficult to use) and is built on libcurl, but I've never used that.

You could also try downloading different files concurrently, but keep in mind that optimizing your download times too aggressively may not be very polite towards the website in question.
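A minimal sketch of the concurrent approach, assuming Python 3's concurrent.futures; the helper name `download_all` and the injectable `fetch` callable are illustrative, with `fetch` standing in for whatever per-file download function you use (e.g. urlretrieve):

```python
from concurrent.futures import ThreadPoolExecutor

def download_all(urls, fetch, max_workers=4):
    """Run fetch(url) for each URL on a small thread pool.
    Downloads are I/O-bound, so threads overlap the network waits;
    keep max_workers modest to stay polite to the server."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves the input order of the results
        return list(pool.map(fetch, urls))

# Hypothetical usage with urlretrieve:
# from urllib.request import urlretrieve
# download_all(file_urls, lambda u: urlretrieve(u, u.rsplit('/', 1)[-1]))
```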

Sorry for the lack of hyperlinks, but SO tells me "sorry, new users can only post a maximum of one hyperlink"

Added some links for ya, newb :)
Triptych
I am not sure if parsing is the problem... it's retrieving and storing the files which is causing the delay...
+3  A: 

Transfer speeds can be easily misleading. Could you try the following script, which simply downloads the same URL with both wget and urllib.urlretrieve? Run it a few times in case you're behind a proxy which caches the URL on the second attempt.

For small files, wget will take slightly longer due to the external process's startup time, but for larger files that should become irrelevant.

from time import time
import urllib
import subprocess

target = "http://example.com" # change this to a more useful URL

wget_start = time()

proc = subprocess.Popen(["wget", target])
proc.communicate()

wget_end = time()


url_start = time()
urllib.urlretrieve(target)
url_end = time()

print "wget -> %s" % (wget_end - wget_start)
print "urllib.urlretrieve -> %s"  % (url_end - url_start)
dbr
+1  A: 

urllib works as fast as wget for me. Try this code; it shows the progress as a percentage, just like wget does.

import sys, urllib

def reporthook(blocks, block_size, total_size):
    # the trailing ',' is important! it keeps print on the same line
    # so the '\r' can overwrite it on the next call
    print "% 3.1f%% of %d bytes\r" % (min(100, float(blocks * block_size) / total_size * 100), total_size),
    # you can also use sys.stdout.write:
    # sys.stdout.write("\r% 3.1f%% of %d bytes"
    #                  % (min(100, float(blocks * block_size) / total_size * 100), total_size))
    sys.stdout.flush()

for url in sys.argv[1:]:
    filename = url.rsplit('/', 1)[-1]
    print url, "->", filename
    urllib.urlretrieve(url, filename, reporthook)
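The hook's three arguments are the number of blocks transferred so far, the block size, and the total file size in bytes; the percentage computation can be isolated like this (Python 3 syntax, helper name `percent_done` is illustrative):

```python
def percent_done(block_count, block_size, total_size):
    """Percentage downloaded, capped at 100 because the
    final block can overshoot the total file size."""
    return min(100.0, block_count * block_size / total_size * 100)

print(percent_done(5, 8192, 81920))  # halfway through an 80 KiB file -> 50.0
```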
Xuan
A: 

You can use wget -k to convert the links in all downloaded files into relative links suitable for local viewing.

Alex