views: 294
answers: 2

Hello All ...

I have been looking into different systems for creating a fast cache in a web farm running Python/mod_wsgi. Memcached and others are options, but I was wondering:

I don't need to share data across machines; I want each machine to maintain its own local cache.

Does Python or WSGI provide a mechanism for sharing native Python data within Apache, such that the data persists and is available to all threads/processes until the server is restarted? That way I could keep a cache of objects, with concurrency control, in the memory space of all running application instances.
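Within a single process, the idea can be sketched as a module-level dict guarded by a lock (a minimal illustration, not a specific library's API; note each Apache worker process would still get its own copy):

```python
import threading
import time

_cache = {}                # maps key -> (expiry_timestamp, value)
_lock = threading.Lock()   # guards _cache across threads in this process

def cache_set(key, value, ttl=300):
    """Store value for ttl seconds; visible to all threads in this process."""
    with _lock:
        _cache[key] = (time.time() + ttl, value)

def cache_get(key, default=None):
    """Return a cached value, evicting it if its TTL has expired."""
    with _lock:
        entry = _cache.get(key)
        if entry is None:
            return default
        expires, value = entry
        if expires < time.time():
            del _cache[key]
            return default
        return value
```

Because the module is imported once per process, every thread served by that process sees the same `_cache`.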

If not, it sure would be useful.

Thanks!

+1  A: 

There's Django's thread-safe in-memory cache backend; see here. It's cPickle-based, and although it's designed for use with Django, it has minimal dependencies on the rest of Django which you could easily refactor away. Each process would get its own cache, shared between its threads. If you want a cache shared by all processes on the same machine, you could run this cache in its own process with an IPC interface of your choice (domain sockets, say), or use memcached locally, or, if you might ever want persistence across restarts, something like Tokyo Cabinet with a Python interface like this.
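The "cache in its own process with an IPC interface" idea can be sketched with the stdlib's `multiprocessing.Manager`, which spawns a server process holding the data and hands out proxies to it (an illustration of the approach, not Django's or memcached's code):

```python
from multiprocessing import Manager, Process

def worker(shared, key, value):
    # Runs in a separate process; the write travels over the
    # manager's IPC channel back to the server process.
    shared[key] = value

if __name__ == "__main__":
    manager = Manager()      # spawns a server process that owns the dict
    shared = manager.dict()  # proxy object, usable from any process
    p = Process(target=worker, args=(shared, "answer", 42))
    p.start()
    p.join()
    print(shared["answer"])  # the parent sees the child's write
```

For an Apache setup you would keep the manager process alive separately and have each worker connect to it, but the proxy mechanics are the same.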

Vinay Sajip
+1 ... Looks good, but isn't cPickle supposed to be quite slow?
Aiden Bell
It's probably easy to get up and running, so you can benchmark it and see if it meets your needs. Obviously, if it's all Python-to-Python in the same process, then you needn't bother serialising entries at all; it then just acts like a big dict with a cache expiration policy.
Vinay Sajip
Ah, yeah, a server-wide dict is what I'm after really
Aiden Bell
+2  A: 

This is thoroughly covered by the Sharing and Global Data section of the mod_wsgi documentation. The short answer is: no, not unless you run everything in one process, and even that isn't an ideal solution.
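For reference, confining an application to a single daemon process in mod_wsgi looks roughly like this (directive names are mod_wsgi's; the process group name, thread count, and paths are illustrative):

```apache
WSGIDaemonProcess mysite processes=1 threads=15
WSGIProcessGroup mysite
WSGIScriptAlias / /path/to/app.wsgi
```

With `processes=1`, all requests share one interpreter, so module-level data is visible to every thread, at the cost of losing multi-process concurrency.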

It should be noted that caching is ridiculously easy to do with Beaker middleware, which supports multiple backends including memcache.
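The shape of caching at the WSGI-middleware layer, which is where Beaker plugs in, can be sketched in plain WSGI (a hand-rolled, single-process illustration without locking, not Beaker's actual API):

```python
import time

class CacheMiddleware:
    """Caches full responses per PATH_INFO for `ttl` seconds."""

    def __init__(self, app, ttl=60):
        self.app = app
        self.ttl = ttl
        self._store = {}  # path -> (expiry, status, headers, body)

    def __call__(self, environ, start_response):
        path = environ.get("PATH_INFO", "/")
        hit = self._store.get(path)
        if hit and hit[0] > time.time():
            _, status, headers, body = hit
            start_response(status, headers)
            return [body]
        # Miss: run the wrapped app and record what it sends.
        captured = {}
        def capture(status, headers, exc_info=None):
            captured["status"], captured["headers"] = status, list(headers)
            return lambda data: None  # write() callable, unused here
        body = b"".join(self.app(environ, capture))
        self._store[path] = (time.time() + self.ttl,
                             captured["status"], captured["headers"], body)
        start_response(captured["status"], captured["headers"])
        return [body]
```

Usage is just wrapping the application object, e.g. `application = CacheMiddleware(application, ttl=30)`; Beaker works the same way but with pluggable backends (memory, file, memcached).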

Hao Lian
+1 ... Looks interesting, will take a peek
Aiden Bell