views: 179

answers: 3
Hi everyone,

I have been thinking about how to make sure that a particular key is distributed to ALL memcached servers in a pool.

My current, untested solution is to create another Memcached client instance, something like this:

$servers = array(
    array('192.168.1.1', 11211, 50),   // host, port, weight
);

$this->tempMemcached = new Memcached();
$this->tempMemcached->addServers($servers);
$this->tempMemcached->setOption(Memcached::OPT_COMPRESSION, $this->compress);

// Write the key once per server; setByKey() routes each write by hashing the server key.
foreach ($servers as $server) {
    $serverKey = $server[0] . ':' . $server[1];
    $this->tempMemcached->setByKey($serverKey, $key, $value, $expireTime);
}

$this->tempMemcached->quit();

What is the common-sense approach in this case, when certain keys need to be stored on ALL servers for reliability?

+3  A: 

I think you are missing the point of Memcached. It's not for reliable data storage; it's for VERY fast access to cached data. If you want redundancy, try a NoSQL database like MongoDB...

Besides, creating several connections is going to be bad for performance and reliability (the more connections you make, the greater the chance that something goes wrong, and the more work there is to do for each request).

Simplify, don't complicate...

ircmaxell
Hi! I know that, but keys that need to be stored on all servers are rare, so the number of connections should stay "reasonable", I think. I'm sure there's a better way to do this, though!
Industrial
+2  A: 

I think you are not using memcached the way it was designed to be used. Have a look at the FAQ: you should store only a single copy of each item, which the hashing algorithm will place on a particular node.

Now if you want to have an item available on all of the nodes, the only way to do this is by iterating over them, exactly as you are doing at the moment.

I hope your code also handles the case where the item isn't found in the cache.
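
For illustration, here is a minimal sketch of that iteration (the $servers list, the set_everywhere() and get_with_fallback() names, and the $loadFromSource callback are all hypothetical), using one dedicated client per node for the writes and falling back to the source on a read miss:

// Hypothetical helper: write one key directly to every node by giving
// each server its own Memcached client.
function set_everywhere(array $servers, $key, $value, $ttl)
{
    foreach ($servers as $server) {
        $m = new Memcached();
        $m->addServer($server['host'], $server['port']);
        $m->set($key, $value, $ttl);   // same key on every node
        $m->quit();
    }
}

// Hypothetical helper: read through the normal pooled client and rebuild
// the value from the source when it isn't found in the cache.
function get_with_fallback(Memcached $pool, $key, $ttl, callable $loadFromSource)
{
    $value = $pool->get($key);
    if ($value === false && $pool->getResultCode() === Memcached::RES_NOTFOUND) {
        $value = $loadFromSource();    // cache miss: go back to the source
        $pool->set($key, $value, $ttl);
    }
    return $value;
}

$servers = array(
    array('host' => '192.168.1.1', 'port' => 11211),
    array('host' => '192.168.1.2', 'port' => 11211),
);
set_everywhere($servers, $key, $value, $expireTime);

Reads still go through your normal pooled client; only the special keys take the per-node write path.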

mindas
Hi Mindas, thanks for your answer!
Industrial
@Industrial: This is the best you're going to get. As others have said, this is not what memcached is meant to do, so it wouldn't make sense for the creators to implement some kind of replication mechanism. Iteration is the only way.
ryeguy
+1  A: 

We've had to "spread" the load of a key that gets hit VERY frequently across the memcache servers. We do this by simply appending a random number from 0 to N to the key, where N is a multiple of the number of instances you've got configured. If you miss, you go to the source (database, whatever). I say a multiple because memcache hashes against your key, so you could get a bucketing collision without knowing it.

This has an up-front cost of more reads from the source (proportional to your multiple N), but it spreads the load of one key across all the instances and keeps those boxes happy.

But yeah, this isn't for redundancy, it's for load balancing.
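
A minimal sketch of that trick in the same PHP Memcached API as the question (the function name and the $loadFromSource callback are hypothetical; $n is the multiple of your instance count):

// Hypothetical helper for the key-spreading trick: read one of the N
// suffixed copies at random, and repopulate just that copy on a miss.
function get_hot_key(Memcached $pool, $key, $n, $ttl, callable $loadFromSource)
{
    $suffixed = $key . ':' . mt_rand(0, $n - 1);   // e.g. "hotkey:3"
    $value = $pool->get($suffixed);
    if ($value === false) {
        $value = $loadFromSource();            // miss: go to the source...
        $pool->set($suffixed, $value, $ttl);   // ...and cache this copy
    }
    return $value;
}

Each suffixed copy hashes to its own bucket, so the hot traffic is spread across the pool instead of hammering one box.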

Justin