views: 427
answers: 4
I want to be able to download an image (to my computer or to a web server), resize it, and upload it to S3. The piece I'm concerned with here is:

What would be a recommended way to do the downloading portion within Python (i.e., I don't want to use external tools, bash, etc.)? I want the image kept in memory until I'm done with it (versus downloading it to a local drive and then working with it). Any help is much appreciated.

A: 

Consider:

import urllib

# Python 2 urllib: open the URL and read the whole response body into memory
f = urllib.urlopen(url_of_image)
image = f.read()

http://docs.python.org/library/urllib.html
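Since the goal is to resize without touching disk, here is a small sketch building on the above (it assumes the Python Imaging Library is available, which the question doesn't state, and the URL is just an example): wrap the downloaded bytes in a StringIO so PIL can treat them as a file.

import urllib
from StringIO import StringIO
from PIL import Image  # assumption: PIL is installed

# Fetch the image bytes into memory, then hand them to PIL via a file-like
# buffer so nothing is ever written to disk.
data = urllib.urlopen("http://sstatic.net/so/img/logo.png").read()
img = Image.open(StringIO(data))
resized = img.resize((48, 48))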

foosion
A: 

PycURL, urllib, and urllib2 are all options. PycURL is a Python interface to libcurl; urllib and urllib2 are both part of Python's standard library. urllib is simpler, while urllib2 is more powerful but also more complicated.

urllib example:

import urllib

# Python 2: URLopener.retrieve() downloads the URL; with no destination
# filename given, it saves to a temporary file and returns (filename, headers)
opener = urllib.URLopener()
filename, headers = opener.retrieve("http://sstatic.net/so/img/logo.png")

In this case the file is not kept in memory, but rather saved as a temporary file with a generated name.
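If you'd rather pick the destination filename yourself, the module-level helper does the same job (a sketch; the URL and filename are just examples):

import urllib

# Downloads straight to the given path and returns (filename, headers)
filename, headers = urllib.urlretrieve(
    "http://sstatic.net/so/img/logo.png", "logo.png")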

cpharmston
+2  A: 

urllib (simple but a bit rough) and urllib2 (powerful but a bit more complicated) are the recommended standard library modules for grabbing data from a URL (either into memory or to disk). For simple-enough needs, x = urllib.urlopen(theurl) will give you an object that lets you access the response headers (e.g., to find out the image's content type) and the data (as x.read()). urllib2 works similarly, but lets you control proxying, the user agent, cookies, HTTPS, authentication, and much more than plain urllib does.
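For example, a minimal urllib2 sketch (Python 2; the URL and User-Agent string are only illustrative) that sets a header and reads the content type and the image bytes into memory:

import urllib2

# Build a request with a custom User-Agent, then read the response in memory
request = urllib2.Request("http://sstatic.net/so/img/logo.png",
                          headers={"User-Agent": "my-downloader/1.0"})
response = urllib2.urlopen(request)
content_type = response.info().getheader("Content-Type")  # e.g. "image/png"
data = response.read()  # the image bytes, never written to disk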

Alex Martelli
A: 

You could always have a look at hand.

If I remember correctly, it was written to grab cartoons from sites that don't have feeds.

Flávio Amieiro