The DB load on my site is getting really high, so it's time for me to cache common queries that are being called thousands of times an hour and whose results are not changing. So, for instance, on my City model I do the following:

def self.fetch(id)
  Rails.cache.fetch("city_#{id}") { City.find(id) }
end

def after_save
  Rails.cache.delete("city_#{self.id}")
end

def after_destroy
  Rails.cache.delete("city_#{self.id}")
end

So now when I call City.fetch(1) the first time I hit the DB, but the next 1000 times I get the result from memory. Great. But most of the calls to City are not City.fetch(1) but @user.city.name, where Rails does not use the fetch but queries the DB again... which makes sense, but is not exactly what I want it to do.

I can do City.fetch(@user.city_id), but that is ugly.

So my question to you guys: what are the smart people doing? What is the right way to do this?

A: 

Check out cached_model

Bill
Memoization just wraps the Rails cache. I do not think it will help with model associations like you are looking for.
Bill
Memoization doesn't wrap the Rails cache. Rails.cache is typically a cache store shared between processes (such that you actually obtain caching benefits). Memoization only occurs within the current process.
Michael
+1  A: 

I would go ahead and take a look at Memoization, which is now in Rails 2.2.

"Memoization is a pattern of initializing a method once and then stashing its value away for repeat use."

There was a great Railscast episode on it recently that should get you up and running nicely.

Quick code sample from the Railscast:

class Product < ActiveRecord::Base
  extend ActiveSupport::Memoizable

  belongs_to :category

  def filesize(num = 1)
    # some expensive operation
    sleep 2
    12345789 * num
  end
  memoize :filesize
end
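To make the per-process point from the comments concrete, here is a plain-Ruby sketch of what `memoize :filesize` effectively does (this is an illustration, not the actual `ActiveSupport::Memoizable` source): results are stashed in an instance-level hash keyed by the arguments, so repeat calls within the same process skip the expensive work, but nothing is shared across processes and nothing is ever invalidated.

```ruby
# Hand-rolled memoization sketch: cache results per object, per argument
# list. Unlike Rails.cache, this hash lives only in the current process.
class Product
  def filesize(num = 1)
    @_memoized_filesize ||= {}
    @_memoized_filesize.fetch([num]) do
      # The "expensive operation" only runs on a cache miss.
      @_memoized_filesize[[num]] = 12345789 * num
    end
  end
end

product = Product.new
product.filesize     # computed: 12345789
product.filesize     # served from the in-memory hash, no recomputation
product.filesize(2)  # different argument, computed fresh: 24691578
```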

More on Memoization

mwilliams
What is the benefit of memoization over Rails.cache, and how does that solve the problem of @user.city.name, where Rails does a find on City instead of using cached data?
Memoization will only store the result on a per-server-process basis and does not invalidate; as such, it's relatively inappropriate for your situation and isn't intended to be used in this way.
Michael
+5  A: 

With respect to the caching, a couple of minor points:

It's worth using a slash to separate object type and id, which is the Rails convention. Even better, ActiveRecord models provide the cache_key instance method, which returns a unique identifier built from the table name and id: "cities/13", etc.

One minor correction to your after_save filter: since you have the data on hand, you might as well write it back to the cache rather than delete it. That saves you a trip to the database ;)

def after_save
  Rails.cache.write(cache_key, self)
end

As to the root of the question, if you're continuously pulling @user.city.name, there are two real choices:

  • Denormalize the user's city name onto the user row as @user.city_name (keeping the city_id foreign key). This value should be written at save time.

-or-

  • Implement your User.fetch method to eager-load the city. Only do this if the contents of the city row never change (i.e. name, etc.); otherwise you can open up a can of worms with respect to cache invalidation.
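The denormalization option can be sketched like this (plain-Ruby stand-ins for the models; the CITIES hash and class bodies are illustrative, not the actual app code — in Rails the sync would live in a before_save callback and the cities hash would be the cities table):

```ruby
# Option 1 sketched: copy the city name onto the user record at save
# time, so reads of user.city_name never touch the City table.
class City
  attr_reader :id, :name
  def initialize(id, name)
    @id, @name = id, name
  end
end

# Stand-in for the cities table.
CITIES = { 1 => City.new(1, "Springfield") }

class User
  attr_reader :city_id, :city_name

  def initialize(city_id)
    @city_id = city_id
  end

  # In Rails this would be `before_save :sync_city_name`.
  def save
    @city_name = CITIES[@city_id].name
    self
  end
end

user = User.new(1).save
user.city_name  # => "Springfield", with no City lookup at read time
```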

Personal opinion: Implement basic id based fetch methods (or use a plugin) to integrate with memcached, and denormalize the city name to the user's row.

I'm personally not a huge fan of cached-model-style plugins; I've never seen one that saved a significant amount of development time that I didn't grow out of in a hurry.

If you're issuing way too many database queries, it's definitely worth checking out eager loading (through :include) if you haven't already. That should be the first step in reducing the number of database queries.
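To see why :include helps, here is a toy query counter (the db_fetch_* helpers are stand-ins for ActiveRecord, purely for illustration): looping over users and touching each one's city fires one City query per user, while eager loading fetches every needed city in one batched query up front.

```ruby
# Toy illustration of the N+1 query problem that :include avoids.
$query_count = 0

# Stand-in for a single-row lookup, e.g. user.city on a lazy association.
def db_fetch_city(id)
  $query_count += 1
  "City #{id}"
end

# Stand-in for one batched query, like User.find(:all, :include => :city)
# issuing a single SELECT over all the needed city ids.
def db_fetch_cities(ids)
  $query_count += 1
  ids.map { |id| [id, "City #{id}"] }
end

user_city_ids = [1, 2, 3, 4, 5]

# Without eager loading: one City query per user (the N in N+1).
user_city_ids.each { |id| db_fetch_city(id) }
lazy_queries = $query_count   # 5 queries

# With eager loading: one query fetches every city, then reads hit memory.
$query_count = 0
city_names = Hash[db_fetch_cities(user_city_ids.uniq)]
user_city_ids.each { |id| city_names[id] }
eager_queries = $query_count  # 1 query
```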

Michael