If the application asks for a similar result set to one that has recently been requested, how might the ORM keep track of which results are stale and which can be reused from before without using too many resources (memory) or creating too much architectural complexity?
Cache invalidation is a very tricky matter. The basic case you propose seems like something that is most easily handled by the database's query cache (frequent requests would keep the query in cache). Once the cache strategy becomes more complicated than this, most gains would come from manually managing the cache and cache expiration with a separate key-value cache store.
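A minimal sketch of that manual approach, assuming a Redis instance reachable through redis-py (neither is named above; any key-value store with TTL support would do the same job). Results are stored under a key derived from the query, expire after a fixed TTL, and are explicitly deleted when a write makes them stale; the `run_query` callable is a hypothetical stand-in for whatever actually hits the database:

```python
import hashlib
import json

import redis  # assumed client; any TTL-capable key-value store works the same way

cache = redis.Redis(host="localhost", port=6379)
CACHE_TTL = 300  # seconds; tune to how stale a result set is allowed to be


def cache_key(sql, params):
    """Derive a stable key from the query text and its parameters."""
    payload = json.dumps({"sql": sql, "params": params}, sort_keys=True)
    return "query:" + hashlib.sha1(payload.encode()).hexdigest()


def fetch_with_cache(run_query, sql, params):
    """Return cached rows if present, otherwise run the query and cache the result."""
    key = cache_key(sql, params)
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    rows = run_query(sql, params)  # hypothetical callable that executes the query
    cache.setex(key, CACHE_TTL, json.dumps(rows))
    return rows


def invalidate(sql, params):
    """Explicit expiration: call this when a write makes the cached result stale."""
    cache.delete(cache_key(sql, params))
```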
If this sort of thing is the norm for your application's data access and you are into trying trendy new things, CouchDB's map-reduce views might be a good fit.
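In that setup a view is defined once in a design document and CouchDB keeps its index up to date incrementally, so repeated requests for the same aggregation stay cheap. A rough sketch against CouchDB's HTTP API (the database name, view name, and document fields are placeholders, not from the original):

```python
import requests

DB = "http://localhost:5984/orders"  # placeholder database

# Design document with a map-reduce view: count orders per customer.
design = {
    "views": {
        "orders_by_customer": {
            "map": "function (doc) { if (doc.type === 'order') emit(doc.customer_id, 1); }",
            "reduce": "_count",
        }
    }
}
requests.put(f"{DB}/_design/stats", json=design)

# Repeated requests are served from the incrementally maintained index,
# not by rescanning the raw documents.
resp = requests.get(
    f"{DB}/_design/stats/_view/orders_by_customer", params={"group": "true"}
)
print(resp.json()["rows"])
```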
Beyond basic memoization, I tend to view caching at the ORM level as a fairly finicky and poor plan.
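"Basic memoization" here might look like nothing more than holding onto a query result for the lifetime of a repository or request object, so repeated identical calls don't hit the database again. A hypothetical sketch (class and table names are made up):

```python
class ProductRepository:
    """Memoizes query results for the lifetime of this repository instance."""

    def __init__(self, connection):
        self._conn = connection
        self._memo = {}

    def products_in_category(self, category):
        # Repeated calls with the same category reuse the first result.
        if category not in self._memo:
            cur = self._conn.execute(
                "SELECT id, name FROM products WHERE category = ?", (category,)
            )
            self._memo[category] = cur.fetchall()
        return self._memo[category]
```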
When I need to know whether the local data is in sync with the (remote) server, I keep track of transactions.
Before "refreshing" the local data, I query the transaction history; if no transaction has touched the relevant (remote) data since the last refresh, the local copy is still in sync.
I'm not sure this actually minimizes complexity, though.
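A minimal sketch of that transaction-tracking check, assuming the server keeps an audit table recording which table each write touched and when (the table and column names here are made up for illustration):

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("server.db")  # stands in for the remote server


def is_local_copy_fresh(table_name, last_refresh):
    """True if no recorded transaction has touched table_name since last_refresh."""
    row = conn.execute(
        "SELECT COUNT(*) FROM transaction_log "
        "WHERE table_name = ? AND committed_at > ?",
        (table_name, last_refresh),
    ).fetchone()
    return row[0] == 0


last_refresh = datetime(2024, 1, 1, tzinfo=timezone.utc).isoformat()
if not is_local_copy_fresh("customers", last_refresh):
    pass  # re-fetch the customers data and record a new last_refresh timestamp
```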