views:

30

answers:

2

What's the fastest way to compress Python objects (lists, dictionaries, strings, etc.) before saving them to cache, and to decompress them after reading from cache?

I'm using Django, and I'd like to add compress/decompress support directly to Django's cache backend so it's available to all my Django apps.

I looked into django/core/cache/backends/memcached.py

import cmemcache as memcache

class CacheClass(BaseCache):

    def __init__(self, server, params):
        BaseCache.__init__(self, params)
        self._cache = memcache.Client(server.split(';'))

    def get(self, key, default=None):
        val = self._cache.get(smart_str(key))
        if val is None:
            return default
        return val

    def set(self, key, value, timeout=0):
        self._cache.set(smart_str(key), value, self._get_memcache_timeout(timeout))

Looks like pickling/unpickling is done by the cmemcache library. I don't know where to put the compress/decompress code.

+2  A: 

Firstly - are you sure you need it? Are your data structures too big to fit uncompressed in the cache? There is an overhead to compression/decompression that may negate any gains you've made by caching in the first place.

If you really do need compression, then you probably want to use zlib.

If you are going to use zlib, you might want to experiment with the different compression levels available in the compress function, to balance CPU time against compression ratio:

zlib.compress(string[, level])
Compresses the data in string, returning a string containing compressed data. level is an integer from 1 to 9 controlling the level of compression; 1 is fastest and produces the least compression, 9 is slowest and produces the most. The default value is 6. Raises the zlib.error exception if any error occurs.
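As a rough illustration of that trade-off (the sample data here is made up), you can pickle an object once and compare the output of a few compression levels:

```python
import pickle
import zlib

# Hypothetical, fairly repetitive data - just to illustrate the trade-off.
data = {"user_%d" % i: list(range(50)) for i in range(100)}
raw = pickle.dumps(data, pickle.HIGHEST_PROTOCOL)

# Compare sizes at the fastest, default, and best compression levels.
for level in (1, 6, 9):
    compressed = zlib.compress(raw, level)
    print("level %d: %d -> %d bytes" % (level, len(raw), len(compressed)))

# Round-trip: decompress, then unpickle.
restored = pickle.loads(zlib.decompress(zlib.compress(raw, 6)))
assert restored == data
```

Timing each level with the timeit module against your real cached objects would tell you whether the extra CPU cost of level 9 is worth it.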

Dominic Rodger
My server is I/O bound and RAM bound, not CPU bound. The current memcached allocation uses 1.3GB of RAM, so compressing the data by 50% would save 650MB of RAM, or make it possible to store twice as many items in the cache.
jack
@jack - see my edit - good luck!
Dominic Rodger
@Dominic, thanks, I voted up your answer. But I hope to find a more generic solution that works at the cache backend level.
jack
@jack I wonder if your best bet is a custom cache backend that wraps the memcached backend, compressing on set and decompressing on get - see http://docs.djangoproject.com/en/dev/topics/cache/#using-a-custom-cache-backend
Dominic Rodger
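A wrapper along the lines Dominic suggests could sit between Django and the memcached client. This is only a sketch, not Django's backend API: the client interface, the 1-byte marker scheme, and the 400-byte threshold are all assumptions made for illustration.

```python
import pickle
import zlib

class CompressingCacheClient:
    """Sketch: compress values before handing them to an underlying
    memcache-style client (hypothetical interface, not Django's)."""

    def __init__(self, client, min_compress_len=400):
        self._client = client
        self._min_len = min_compress_len

    def set(self, key, value, timeout=0):
        blob = pickle.dumps(value, pickle.HIGHEST_PROTOCOL)
        if len(blob) > self._min_len:
            compressed = zlib.compress(blob)
            # Only keep the compressed form if it is actually smaller.
            if len(compressed) < len(blob):
                self._client.set(key, b"Z" + compressed, timeout)
                return
        # One-byte marker distinguishes raw from compressed payloads.
        self._client.set(key, b"R" + blob, timeout)

    def get(self, key, default=None):
        blob = self._client.get(key)
        if blob is None:
            return default
        marker, payload = blob[:1], blob[1:]
        if marker == b"Z":
            payload = zlib.decompress(payload)
        return pickle.loads(payload)
```

Because the wrapper pickles the value itself, it works uniformly for lists, dicts, and strings, at the cost of double-pickling if the underlying client also pickles.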
A: 

I looked further into python-memcache's source code.

It already supports compressing values with zlib before sending them to memcached.

def _set(self, cmd, key, val, time, min_compress_len = 0):
    ...
    lv = len(val)
    # We should try to compress if min_compress_len > 0 and we could
    # import zlib and this string is longer than our min threshold.
    if min_compress_len and _supports_compress and lv > min_compress_len:
        comp_val = compress(val)
        # Only retain the result if the compression result is smaller
        # than the original.
        if len(comp_val) < lv:
            flags |= Client._FLAG_COMPRESSED
            val = comp_val

Here is Django's implementation of the "set" command in its memcache backend:

def set(self, key, value, timeout=0):
    self._cache.set(smart_str(key), value, self._get_memcache_timeout(timeout))

Apparently it does not pass the "min_compress_len" parameter, so python-memcache's built-in compression never kicks in.
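One possible fix is a small subclass of the backend whose set() forwards min_compress_len to the client. The sketch below substitutes minimal stand-ins for Django's smart_str and python-memcache's Client so it runs on its own; the real classes would be used in practice, and the 1024-byte threshold is an arbitrary choice.

```python
# Hypothetical stand-in for Django's smart_str helper.
def smart_str(key):
    return str(key)

class FakeMemcacheClient:
    """Minimal stand-in for python-memcache's Client that records
    the min_compress_len it receives."""
    def __init__(self):
        self.calls = []

    def set(self, key, val, time=0, min_compress_len=0):
        self.calls.append((key, val, time, min_compress_len))

class CompressingCacheClass:
    """Sketch of a cache backend whose set() passes min_compress_len
    through to python-memcache (names are illustrative)."""
    def __init__(self, client, min_compress_len=1024):
        self._cache = client
        self._min_compress_len = min_compress_len

    def _get_memcache_timeout(self, timeout):
        return timeout or 0

    def set(self, key, value, timeout=0):
        self._cache.set(
            smart_str(key),
            value,
            self._get_memcache_timeout(timeout),
            min_compress_len=self._min_compress_len,  # enable compression
        )

backend = CompressingCacheClass(FakeMemcacheClient())
backend.set("big-key", "x" * 4096)
```

With a real python-memcache Client in place of the fake, any value longer than the threshold would be zlib-compressed by the library itself, with no changes needed in application code.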

jack