views: 78

answers: 2
If the application asks for a result set similar to one that was recently requested, how might the ORM keep track of which results are stale and which can be reused, without using too much memory or creating too much architectural complexity?

+2  A: 

Cache invalidation is a very tricky matter. The basic case you propose seems like something that is most easily handled by the database's query cache (frequent requests would keep the query in cache). Once the cache strategy becomes more complicated than this, most gains would come from manually managing the cache and cache expiration with a separate key-value cache store.
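As a rough sketch of that last approach, the application can cache query results in a key-value store with a short TTL and delete the keys it knows about on the write path. The KeyValueCache class, the fetch_users_by_status query, and the 30-second TTL below are hypothetical placeholders for whatever store (memcache, Redis, ...) and ORM query your application actually uses:

    import time

    class KeyValueCache:
        """Tiny in-memory stand-in for an external key-value store (memcache, Redis, ...)."""
        def __init__(self):
            self._store = {}  # key -> (expires_at, value)

        def get(self, key):
            entry = self._store.get(key)
            if entry is None:
                return None
            expires_at, value = entry
            if time.time() > expires_at:   # entry has expired; treat as a miss
                del self._store[key]
                return None
            return value

        def set(self, key, value, ttl=60):
            self._store[key] = (time.time() + ttl, value)

        def delete(self, key):
            self._store.pop(key, None)     # explicit invalidation on writes

    cache = KeyValueCache()

    def fetch_users_by_status(status, run_query):
        """Read path: return cached rows when fresh, otherwise hit the database and cache them."""
        key = "users:status=%s" % status
        rows = cache.get(key)
        if rows is None:
            rows = run_query(status)       # the real ORM/database call
            cache.set(key, rows, ttl=30)
        return rows

    def save_user(user, persist):
        """Write path: persist the change, then invalidate any result set it could affect."""
        persist(user)
        cache.delete("users:status=%s" % user["status"])

The point is that staleness is handled by a short TTL plus explicit deletes where the application writes data, so the ORM itself never has to track which cached result sets are affected by which writes.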

If this sort of thing is the norm for your application's data access and you are into trying trendy new things, CouchDB's MapReduce views might be a good fit.

Beyond basic memoization, I tend to view caching at the ORM level as a fairly finicky and poor plan.

Ben Hughes
I was referring directly to a cache outside the ORM, like memcache (I do web dev), but any key-value store should work. Essentially, doing ORM caching automatically would require duplicating a lot of what the database should do, while adding architecture and bugs. In most cases, performance is better gained through explicit application-level caching, better DB indexing, etc.
Ben Hughes
+1  A: 

When I need to know if the local data is in sync with the (remote) server, I keep track of the transactions.

So before "refreshing" the local data I "query the transactions history" and, if no transaction occurred on the concerned (remote) data since the last "refresh", it's still synced.

But I don't know if it's "minimizing the complexity".
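A minimal sketch of that idea, assuming a count_changes_since callable that queries the server's transaction history (for example, a COUNT over a transactions table filtered by timestamp) and a fetch_remote callable that pulls fresh rows:

    import time

    class SyncedResultSet:
        """Reuse the local copy unless the transaction history shows changes since the last refresh."""
        def __init__(self, fetch_remote, count_changes_since):
            self._fetch_remote = fetch_remote
            self._count_changes_since = count_changes_since
            self._local_rows = None
            self._last_refresh = 0.0

        def rows(self):
            if self._local_rows is not None:
                # Cheap staleness check: has anything touched this data since the last refresh?
                if self._count_changes_since(self._last_refresh) == 0:
                    return self._local_rows   # still in sync; reuse the local copy
            # Otherwise refresh from the server and remember when we did so.
            self._local_rows = self._fetch_remote()
            self._last_refresh = time.time()
            return self._local_rows

The client-side bookkeeping is one timestamp per result set, so most of the state lives in the server's transaction log rather than in the ORM.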

dugres