I have a project that calculates a number of 'statistics' about a user's performance and then shows them to the user. All of these statistics ultimately come from a large table of 'interactions' that records the user's interactions with the site. At the moment, every statistic is calculated by looking at this data, and we make extensive use of persistent caching to keep things fast.
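To make that concrete, here's a rough sketch of the shape of the current design (in-memory dicts standing in for our cache and database, and all names are just illustrative):

```python
# Minimal sketch of the current design: stats are derived from the raw
# interactions table on demand, with a persistent cache in front.
cache = {}                 # stat cache: (user_id, stat_name) -> value
interactions = []          # the big table of raw interaction rows

def compute_stat(stat_name, rows):
    # e.g. a simple count-based statistic
    return sum(1 for r in rows if r["type"] == stat_name)

def get_statistic(user_id, stat_name):
    key = (user_id, stat_name)
    if key not in cache:
        rows = [r for r in interactions if r["user_id"] == user_id]
        cache[key] = compute_stat(stat_name, rows)   # recompute from raw data
    return cache[key]

def log_interaction(user_id, interaction):
    interactions.append({"user_id": user_id, **interaction})
    # just dirty the cache; the next lookup recomputes
    for key in [k for k in cache if k[0] == user_id]:
        del cache[key]
```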
We're considering moving to an 'iterative design' where statistic values are stored in the db, and upon logging every interaction we update each stored value according to that interaction's contribution to the corresponding score, so we're essentially updating the values iteratively. (Right now we just dirty the cache.)
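Sketched in the same illustrative style, the iterative design would look something like this:

```python
# Minimal sketch of the proposed iterative design: each statistic's current
# value is stored, and every logged interaction applies its contribution
# directly (again, all names are illustrative).
stored_stats = {}          # (user_id, stat_name) -> current value, kept in the db
interactions = []          # we'd still keep the raw interaction log

def contribution(stat_name, interaction):
    # how much this single interaction adds to the given statistic
    return 1 if interaction["type"] == stat_name else 0

def log_interaction(user_id, interaction, stat_names):
    interactions.append({"user_id": user_id, **interaction})
    for stat_name in stat_names:
        key = (user_id, stat_name)
        stored_stats[key] = stored_stats.get(key, 0) + contribution(stat_name, interaction)

def get_statistic(user_id, stat_name):
    # single lookup, no recomputation
    return stored_stats.get((user_id, stat_name), 0)
```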
I see some trouble with the iterative design: it means we have redundant, potentially out-of-sync information stored in our database, it makes adding new statistics difficult, and it means more work on every interaction log. The benefit, though, is that it simplifies statistic lookups to a single db hit!
Something about this iterative design raises alarm bells for me, but I can't deny the potential time-saving benefits. Should I obey this gut feeling, or go ahead and do it?