It's easy to wrap optional memcached caching around your existing database queries. For example:
Old (DB-only):

function getX
    x = get from db
    return x
end

New (DB with memcache):

function getX
    x = get from memcache
    if found
        return x
    endif
    x = get from db
    set x in memcache
    return x
end
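In Python, that read-through pattern might look like the sketch below. The `DictCache` class and `fetch_from_db` are stand-ins of my own invention for a real memcache client and a real database query; a real client (e.g. pymemcache) exposes the same `get`/`set` shape.

```python
class DictCache:
    """In-memory stand-in for a memcache client (get/set interface)."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value


def fetch_from_db(pkid):
    # Placeholder for the real database query (SELECT * ... WHERE pkid = ?).
    return {"pkid": pkid, "name": "item %d" % pkid}


def get_item(cache, pkid):
    key = "item:%d" % pkid
    item = cache.get(key)
    if item is not None:
        return item                # cache hit
    item = fetch_from_db(pkid)     # cache miss: fall back to the DB
    cache.set(key, item)           # populate the cache for next time
    return item
```

If the cache layer is down, `get` can simply be made to return `None` and every call falls through to the DB, which is what keeps the layer optional.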
The thing is, though, that's not always how you want to cache. For instance, take the following two queries:
-- get all items (recordset)
SELECT * FROM items;
-- get one item (record)
SELECT * FROM items WHERE pkid = 42;
If I were to use the above pseudo-code to handle the caching, I would be storing all fields of item 42 twice: once in the big recordset and once on its own. Instead, I'd rather do something like this:
SELECT pkid FROM items;
and cache that index of PKs, then cache each record individually as well.
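A sketch of that index-plus-records approach is below. The `DB` dict, the `db_*` helpers, and the key names are all placeholders I've made up; `get_multi`/`set_multi` mirror the batch operations most memcache clients provide.

```python
class DictCache:
    """In-memory stand-in for a memcache client with batch operations."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

    def get_multi(self, keys):
        return {k: self._data[k] for k in keys if k in self._data}

    def set_multi(self, mapping):
        self._data.update(mapping)


# Stand-in for the items table.
DB = {1: {"pkid": 1}, 2: {"pkid": 2}, 3: {"pkid": 3}}


def db_all_pkids():
    return sorted(DB)                       # SELECT pkid FROM items

def db_get_items(pkids):
    return {pk: DB[pk] for pk in pkids}     # SELECT * ... WHERE pkid IN (...)


def get_all_items(cache):
    # 1. Get the index of PKs (from cache, else from the DB).
    pkids = cache.get("items:index")
    if pkids is None:
        pkids = db_all_pkids()
        cache.set("items:index", pkids)
    # 2. Batch-get the individual records; fetch any misses from the DB.
    cached = cache.get_multi(["item:%d" % pk for pk in pkids])
    missing = [pk for pk in pkids if "item:%d" % pk not in cached]
    if missing:
        fetched = db_get_items(missing)
        rows = {"item:%d" % pk: row for pk, row in fetched.items()}
        cache.set_multi(rows)
        cached.update(rows)
    return [cached["item:%d" % pk] for pk in pkids]
```

Each record is stored exactly once, and the recordset query is reconstructed from the index plus a single multi-get.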
So in summary, the data access strategy that works best for the DB doesn't neatly fit the memcache strategy. Since I want the memcache layer to be optional (i.e. if memcache is down, the site still works), I'd like the best of both worlds, but to get that I'm pretty sure I'll need to maintain a lot of the queries in two different forms: (1) fetch the index, then the records; and (2) fetch the whole recordset in one query. It gets more complicated with pagination. With the DB you'd use LIMIT/OFFSET in your SQL queries, but with memcache you'd just fetch the index of PKs and then batch-get the relevant slice of the array.
I'm not sure how to design this neatly; does anyone have any suggestions? Better yet, if you've come up against this yourself, how do you handle it?