I'm using Merb::Cache to store txt/xml, and I've noticed that the longer my Merb processes run, the more open TCP sockets they accumulate -- I believe this is causing some major performance problems.

lsof | grep 11211 | wc -l
494
merb      27206       root   71u     IPv4   13759908                 TCP localhost.localdomain:59756->localhost.localdomain:11211 (ESTABLISHED)
merb      27206       root   72u     IPv4   13759969                 TCP localhost.localdomain:59779->localhost.localdomain:11211 (ESTABLISHED)
merb      27206       root   73u     IPv4   13760039                 TCP localhost.localdomain:59805->localhost.localdomain:11211 (ESTABLISHED)
merb      27206       root   74u     IPv4   13760052                 TCP localhost.localdomain:59810->localhost.localdomain:11211 (ESTABLISHED)
merb      27206       root   75u     IPv4   13760135                 TCP localhost.localdomain:59841->localhost.localdomain:11211 (ESTABLISHED)
merb      27206       root   76u     IPv4   13760823                 TCP localhost.localdomain:59866->localhost.localdomain:11211 (ESTABLISHED)
merb      27206       root   77u     IPv4   13760951                 TCP localhost.localdomain:52095->localhost.localdomain:11211 (ESTABLISHED)

etc...

My relevant code is:

    unless exists?(:memcached)
      register(:memcached, Merb::Cache::MemcachedStore,
               :namespace => 'mynamespace', :servers => ['127.0.0.1:11211'])
    end

and:

    when :xml
      unless @hand_xml = Merb::Cache[:memcached].read("/hands/#{@hand.id}.xml")
        @hand_xml = display(@hand)
        Merb::Cache[:memcached].write("/hands/#{@hand.id}.xml", @hand_xml)
      end
      return @hand_xml

Is this code just plain wrong, or am I using the wrong version of memcache?

I have memcached 1.2.8 installed, along with the following:

libmemcached-0.25.14.tar.gz memcached-0.13.gem

This is kind of driving me crazy.

A: 

I might be able to help, but I need to tell a story to do that. Here it is.

Once upon a time there was a cluster of 10 apache(ssl) servers, each configured with exactly 100 threads. There was also a cluster of 10 memcached servers (on the same boxes), and they all seemed to live peacefully. Both the apaches and the memcacheds were guarded by the evil monit daemon.

Then the King installed an 11th apache(ssl) server, and the memcacheds started restarting randomly every few hours! The King began investigating, and what did he find? There was a bug in the PHP memcache module documentation: it said the default memcache connection object was not persistent, but apparently it was. What happened was that every PHP thread (and there were about 1000 of them) opened a connection to each memcached in the pool when it needed one, and then held onto it. With 10 servers there were 10*100 connections to every memcached server and that was fine, but with 11 servers it was 1100, and memcached's maximum number of open sockets was 1024 -- 1024 < 1100. When all the sockets were taken, the monit daemon couldn't connect, so it restarted the memcached.
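For concreteness, a tiny sketch of the arithmetic behind the outage, using the numbers from the story (1024 is memcached's default connection limit):

```ruby
# Each apache(ssl) server holds 100 persistent connections
# to every memcached in the pool.
threads_per_server = 100
memcached_limit    = 1024  # memcached's default max open connections

puts threads_per_server * 10 < memcached_limit  # true  -- 1000 sockets fit
puts threads_per_server * 11 < memcached_limit  # false -- 1100 do not
```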

Every story has to have a moral. So, what did the King do with all of this? He disabled the persistent connections, and they all lived happily ever after, with the number of connections on the cluster peaking at 5 (five). Those servers were serving a huge amount of data, so we couldn't afford 1000 idle sockets, and it was cheaper to negotiate the memcache connection on every request.

I'm sorry, but I don't know Ruby; it looks like either you have an awful number of threads, or you are caching it wrong.

Good luck!

Reef
Well, I think Merb::Cache is a two-fold problem -- I only have one memcached server running right now, and we have five Merbs (handling one connection each, AFAIK). It's persistent for sure, but I think it's opening more connections than it should to begin with... for instance, we should only have something like five connections max, right?
feydr
A: 

OK, I figured out some stuff.

1) It CAN be reasonable to have hundreds or thousands of sockets connected to memcached, assuming you are using a library that relies on epoll or something similar -- however, if you are using Ruby like me, I'm not aware of a library that uses anything other than select() or poll(), so that possibility is ruled out immediately.

2) If you are like me, you only have one memcached server running right now and a couple of mongrels/thins handling requests, so your memcache connections should probably number no more than the mongrels/thins you have running (assuming you are only caching one or two sets of things) -- which was my case.

Here's the fix: set up memcache through the memcached gem directly, rather than through Merb::Cache (which just wraps whatever memcache library you are using):

    MMCACHE = Memcached.new("localhost:11211")

Get/set your values, rescuing only the gem's cache-miss exception rather than everything:

      @cache = MMCACHE.clone   # per-request copy, so requests don't share a socket
      begin
        @hand_xml = @cache.get("/hands/#{@hand.id}.xml")
      rescue Memcached::NotFound
        @hand_xml = display(@hand)
        @cache.set("/hands/#{@hand.id}.xml", @hand_xml)
      end
      @cache.quit              # release the connection when done
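If this pattern shows up in more than one action, the clone/get/set/quit dance can be factored into a small read-through helper. A minimal sketch -- `cache_fetch` and its block API are my own illustration, not part of the memcached gem; `Memcached::NotFound` is the exception that gem raises on a miss:

```ruby
# Hypothetical read-through helper around the memcached gem's
# clone/get/set/quit pattern. `miss_error` is the exception class the
# client raises on a cache miss (Memcached::NotFound for that gem).
def cache_fetch(client, key, miss_error)
  cache = client.clone           # per-request copy: no shared socket
  begin
    cache.get(key)               # hit: return the cached value
  rescue miss_error
    value = yield                # miss: compute the value...
    cache.set(key, value)        # ...store it for next time...
    value                        # ...and return it
  ensure
    cache.quit rescue nil        # always release the connection
  end
end
```

Usage then collapses to `@hand_xml = cache_fetch(MMCACHE, "/hands/#{@hand.id}.xml", Memcached::NotFound) { display(@hand) }`.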

Sit back and drink a cold one, because now when you run this:

lsof | grep 11211 | wc -l

you'll see something like 2 or 3 instead of 2036!

Props to Reef for cluing me in that it's not uncommon for memcache connections to be persistent in the first place.

feydr
I'm glad I could help, mate.
Reef