views: 52 · answers: 5

Hey all, I have a site that looks up info for the end user. It's written in Python and requires several urlopen calls, so it takes a while for a page to load. I was wondering if there is a way to make it faster? Is there an easy Python way to cache, or a way to make the urlopen calls run last?

The urlopens access the Amazon API to get prices, so the site needs to be somewhat up to date. The only option I can think of is to write a script that populates a MySQL database and run it every now and then, but that would be a nuisance.

Thanks!

A: 

How often do the prices change? If they're pretty stable (say they only change once a day, or every hour or so), just go ahead and write a cron script (or equivalent) that retrieves the values and stores them in a database or text file or whatever it is you need.
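
A minimal sketch of that cron-style approach (the URL, item IDs, and output path are placeholders, not part of the real Amazon API), in the Python 2 style used elsewhere in this thread:

    # fetch_prices.py -- run from cron, e.g.: 0 * * * * python fetch_prices.py
    import json
    import urllib2

    PRICE_URL = 'http://example.com/amazon-price?item=%s'  # placeholder URL
    ITEMS = ['B000001', 'B000002']                          # placeholder item IDs

    prices = {}
    for item in ITEMS:
        prices[item] = urllib2.urlopen(PRICE_URL % item).read()

    # The page handler then reads this file instead of calling urlopen on every hit.
    with open('/tmp/prices.json', 'w') as f:
        json.dump(prices, f)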

I don't know if you can check the timestamp data from the Amazon API - if they report that sort of thing.

Wayne Werner
Thanks Wayne! The only issue I have is that sometimes I get a timeout error for the urlopen. Is there a way I can ensure that this won't happen?
Jill S
A: 

There are several things you can do.

  • The urllib caching mechanism is temporarily disabled, but you could easily roll your own by storing the data you get from Amazon in memory or in a file somewhere.

  • Similarly to the above, you could have a separate script that refreshes the prices every so often, and cron it to run every half an hour (say). These could be stored wherever.

  • You could run the URL fetching in a new thread/process, since it is mostly waiting anyway (see the sketch below).
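
A rough sketch of that thread approach, assuming the page just needs the raw response bodies back (the URLs are placeholders), in Python 2 to match the rest of the thread:

    import threading
    import urllib2

    def fetch(url, results):
        # Each thread blocks on its own urlopen, so the network waits overlap.
        results[url] = urllib2.urlopen(url).read()

    urls = ['http://example.com/price1.xml', 'http://example.com/price2.xml']  # placeholders
    results = {}
    threads = [threading.Thread(target=fetch, args=(u, results)) for u in urls]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # results now maps each URL to its response body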

katrielalex
+2  A: 

httplib2 understands http request caching, abstracts urllib/urllib2's messiness somewhat and has other goodies, like gzip support.

http://code.google.com/p/httplib2/
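
Basic usage looks roughly like this (the cache directory name and URL are placeholders); note that httplib2 only re-uses cached responses when the server sends suitable caching headers:

    import httplib2

    # Responses are stored on disk in the given directory; repeat requests that
    # are still fresh according to the server's caching headers skip the network.
    h = httplib2.Http('.amazon_cache')
    response, content = h.request('http://example.com/prices.xml')
    print content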

But besides using that to get the data, if the dataset is not very big, I would also implement some kind of function caching / memoizing. Example: http://wiki.python.org/moin/PythonDecoratorLibrary#Memoize

It wouldn't be too hard to modify that decorator to allow for time-based expiry, e.g. only caching the result for 15 minutes.
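
One way that modification could look (a sketch, not the decorator from the wiki page; the 15-minute default is just an example):

    import time
    import functools

    def memoize_with_expiry(seconds=15 * 60):
        """Cache a function's results; recompute once an entry is older than `seconds`."""
        def decorator(func):
            cache = {}  # maps args -> (timestamp, result)
            @functools.wraps(func)
            def wrapper(*args):
                now = time.time()
                if args in cache:
                    stored_at, result = cache[args]
                    if now - stored_at < seconds:
                        return result
                result = func(*args)
                cache[args] = (now, result)
                return result
            return wrapper
        return decorator

    # Usage: put @memoize_with_expiry(900) above the function that calls urlopen.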

If the results are bigger, you need to start looking into memcached/redis.

Infinity
The fetched data is just an XML file, so it's not too big. It just takes a long time to load for some reason :(
Jill S
Then the HTTP caching should work fine, as long as the data doesn't constantly change on the third-party server.
Infinity
A: 

You could use memcached. It is designed for exactly that, and this way you could easily share the cache between different programs/scripts. It is also really easy to use from Python; check:

http://stackoverflow.com/questions/868690/good-examples-of-python-memcache-memcached-being-used-in-python

Then you update memcached when a key is not there (and also from some cron script), and you're ready to go.
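
With the python-memcached client used in the linked question, the check-then-fetch pattern is roughly this (the key name, URL, and expiry below are arbitrary placeholders):

    import memcache
    import urllib2

    mc = memcache.Client(['127.0.0.1:11211'])

    def get_prices_xml():
        data = mc.get('amazon_prices')  # hypothetical key name
        if data is None:
            data = urllib2.urlopen('http://example.com/prices.xml').read()  # placeholder URL
            mc.set('amazon_prices', data, time=1800)  # expire after 30 minutes
        return data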

Another, simpler option would be to cook your own cache, probably storing the data in a dictionary and/or using cPickle to serialize it to disk (if you want the data to be shared between different runs).
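
A bare-bones version of that cPickle idea, assuming a single cache file is enough (the path is a placeholder):

    import os
    import cPickle

    CACHE_FILE = '/tmp/price_cache.pkl'

    def load_cache():
        # Returns the cached dictionary, or an empty one on the first run.
        if os.path.exists(CACHE_FILE):
            with open(CACHE_FILE, 'rb') as f:
                return cPickle.load(f)
        return {}

    def save_cache(cache):
        with open(CACHE_FILE, 'wb') as f:
            cPickle.dump(cache, f, cPickle.HIGHEST_PROTOCOL)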

juanjux
A: 

If you need to grab from multiple sites at once, you might try asyncore: http://docs.python.org/library/asyncore.html

This way you can easily load multiple pages at once.
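
A minimal asyncore-based fetcher might look something like this (HTTP/1.0 only, no error handling, placeholder URLs); all requests are driven by the same event loop, so the network waits overlap:

    import asyncore
    import socket
    from urlparse import urlsplit

    class HTTPFetcher(asyncore.dispatcher):
        """Tiny asynchronous HTTP/1.0 client that collects the raw response."""

        def __init__(self, url):
            asyncore.dispatcher.__init__(self)
            parts = urlsplit(url)
            self.host = parts.hostname
            path = parts.path or '/'
            if parts.query:
                path += '?' + parts.query
            self.request = 'GET %s HTTP/1.0\r\nHost: %s\r\n\r\n' % (path, self.host)
            self.response = ''
            self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
            self.connect((self.host, parts.port or 80))

        def handle_connect(self):
            pass

        def writable(self):
            return len(self.request) > 0

        def handle_write(self):
            sent = self.send(self.request)
            self.request = self.request[sent:]

        def handle_read(self):
            self.response += self.recv(8192)

        def handle_close(self):
            self.close()

    # Create all fetchers first, then let one event loop drive them concurrently.
    fetchers = [HTTPFetcher(u) for u in ['http://example.com/a.xml',
                                         'http://example.com/b.xml']]
    asyncore.loop()
    for f in fetchers:
        print f.host, len(f.response), 'bytes'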

ralu