For example, when doing analytics there might be a map/reduce run that takes 10 seconds. If other web pages can make use of that result once it has run, that saves 10 seconds per page, so it would be good to have the map/reduce result cached somehow.
One possibility is to record each successful map/reduce run in a collection named map_reduce_result_[timestamp], and to keep that timestamp in a db.run_log collection in MongoDB. The timestamp could be the UNIX epoch time, for example. When other pages need the result, they look up the max timestamp in run_log and then read the result collection stored under that name. But doing so feels a bit like a hack, and I wonder if there are better ways to do it.
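Here is a minimal sketch of that timestamp-based idea in Python with PyMongo, just to make the pattern concrete. The database name (`analytics`), the source collection (`events`), and the map/reduce functions are placeholder assumptions; also note that `Collection.map_reduce()` only exists in older PyMongo drivers (before 4.0), so a newer driver would run the command or an aggregation pipeline instead.

```python
# Sketch of caching a map/reduce result under a timestamped collection name
# and recording the timestamp in db.run_log. Names here are illustrative.
import time
from pymongo import MongoClient, DESCENDING
from bson.code import Code

client = MongoClient("mongodb://localhost:27017")
db = client.analytics  # hypothetical database name

def run_and_cache_map_reduce():
    """Run the slow map/reduce once, write its output to a timestamped
    collection, and log that timestamp in db.run_log."""
    ts = int(time.time())  # UNIX epoch seconds
    out_name = f"map_reduce_result_{ts}"

    mapper = Code("function () { emit(this.page, 1); }")          # example mapper
    reducer = Code("function (k, vals) { return Array.sum(vals); }")  # example reducer

    # Collection.map_reduce() is available in PyMongo < 4.0.
    db.events.map_reduce(mapper, reducer, out=out_name)

    db.run_log.insert_one({"timestamp": ts, "collection": out_name})
    return out_name

def latest_cached_result():
    """Other pages call this instead of re-running the job: find the newest
    run_log entry and read the corresponding cached result collection."""
    latest = db.run_log.find_one(sort=[("timestamp", DESCENDING)])
    if latest is None:
        return None  # no cached run yet
    return list(db[latest["collection"]].find())
```

With this layout, the 10-second job only runs when run_and_cache_map_reduce() is called (for example from a cron job), and every page view just does a cheap find_one on run_log plus a read of the cached collection.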