I want to store the average values of some data that is occasionally generated by users, which I then use in my application to predict future data. The problem I have is that this data may change grossly during the day - for example, users coming in at night may generate much lower values than users coming in during the morning, so just keeping a simple average will not give me reasonable prediction accuracy.

So I need to store some kind of time-based average - for example, a naive solution would be to store the average value for each hour of the day. That way I keep 24 averages: one for all the users that generated data between 12AM and 1AM, a second for all the users that generated data between 1AM and 2AM, and so forth.

I have a few issues with this approach:

1. To predict data properly I'd still need to consult several values (let's say, 2 hours ahead and 2 hours back from now), which I may not have the resources to do. I'd rather consult a single value if it doesn't hurt my accuracy too much.

2. I also want this data remembered only for recent times - if very low values were generated a couple of years ago but since last month everyone is generating high values, then to predict data for the near future I need to respond better than an average over all the data ever created. For the sake of the argument, let's say that everything older than 90 days is not really relevant.

3. The reason I want to use an average value and not just keep all the data ever generated by the users is that I expect a lot of data - I need to store such data for each of 100K to maybe 10M data points, for millions of weekly data entries from users - at the least. I also might want to split the data even further for each data point - maybe based on some user classification.

I'd appreciate it if anyone can give me some hints on how to best calculate my average data without requiring a huge data storage facility :-)

[hint - yes, it's for a GIS application]

A: 

Have you looked at the formulas for calculating moving averages? There are a number of methods described on Wikipedia.
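As one concrete illustration (not taken from this answer), here is a minimal sketch of an exponentially weighted moving average in Python for irregularly spaced samples; the 30-day half-life is an arbitrary assumption you would tune:

```python
import math

class ExponentialMovingAverage:
    """Running average that gradually forgets old data.

    Samples that are one half-life old contribute half as much as a
    brand-new sample; the 30-day default is purely illustrative.
    """
    def __init__(self, half_life_days=30.0):
        self.half_life_s = half_life_days * 86400.0
        self.value = None
        self.last_ts = None

    def update(self, timestamp, sample):
        if self.value is None:
            self.value, self.last_ts = float(sample), timestamp
            return self.value
        # The old average decays according to the time elapsed since the
        # previous sample, so storage stays at a single number per series.
        dt = timestamp - self.last_ts
        decay = math.exp(-math.log(2.0) * dt / self.half_life_s)
        self.value = decay * self.value + (1.0 - decay) * float(sample)
        self.last_ts = timestamp
        return self.value
```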

Chris J
A: 

I think a round robin database (e.g., rrdtool) would be ideally suited for your purposes. Whatever your favorite language is, there is certainly a programming API.

http://oss.oetiker.ch/rrdtool/
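For illustration only, here is a minimal sketch using the python-rrdtool bindings; the file name, step, heartbeat, and retention figures are assumptions, not values taken from this thread:

```python
import rrdtool  # python-rrdtool bindings to the RRDtool library

# All numbers below are illustrative choices.
rrdtool.create(
    "user_values.rrd",
    "--step", "3600",             # expect roughly one update per hour
    "DS:value:GAUGE:7200:U:U",    # one data source, 2h heartbeat, no min/max
    "RRA:AVERAGE:0.5:1:2160",     # keep ~90 days of hourly averages
    "RRA:AVERAGE:0.5:24:90",      # plus 90 daily averages
)

rrdtool.update("user_values.rrd", "N:42.5")   # record a sample at "now"

# Averaged values for the last 24 hours (time range, DS names, rows).
result = rrdtool.fetch("user_values.rrd", "AVERAGE", "--start", "-86400")
```

An RRD keeps a fixed number of consolidated rows per archive, so storage stays constant no matter how long data keeps flowing in.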

Best regards, Noah

Noah
So basically - store everything, let the RRD discard old data and average on that. What about providing different averages for different times of the day?
Guss
Guss, for that you would simply use rrdfetch, as follows: % rrdtool fetch yourdata.rrd AVERAGE -r 900 -s start_time -e end_time
The manpage (with examples) is here: http://oss.oetiker.ch/rrdtool/doc/rrdfetch.en.html
Best, Noah
Noah
A: 

Why not just store all the user-generated values and then calculate exactly what you want when you want it? You can always set up an archiving script to clear out old data when you don't need it any more.

In this way you don't introduce inaccuracies by doing calculations with calculated values.

dnagirl
I can live with a bit of inaccuracy - regarding storing everything, see my comment on @McWafflestix's answer.
Guss
+1  A: 

Use a view to calculate your expected values. That way, you get dynamic construction of your means, and it's simple to query.
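A sketch of what that could look like with SQLite from Python; the table, columns, and the 90-day window are hypothetical stand-ins for your actual schema:

```python
import sqlite3

conn = sqlite3.connect("samples.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS samples (
    point_id    INTEGER,
    recorded_at TEXT,   -- ISO-8601 timestamp
    value       REAL
);

-- Hourly means per data point, recomputed from the raw rows on every query.
CREATE VIEW IF NOT EXISTS hourly_means AS
SELECT point_id,
       strftime('%H', recorded_at) AS hour_of_day,
       AVG(value)                  AS mean_value
FROM samples
WHERE recorded_at >= datetime('now', '-90 days')
GROUP BY point_id, hour_of_day;
""")
```

The trade-off, raised in the comment below, is that the raw rows still have to be kept for the view to aggregate over.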

McWafflestix
Wouldn't that mean that I have to keep every value ever collected? That would be very hard, I think, as I expect more than 100G values per month even for small systems.
Guss
+1  A: 

It sounds like there are two important bits of information in your data set: how many days old the data is, and what hour of the day it comes from.

The predicted value for a future time could be calculated as a weighted average over the data set, with weights decreasing with the age of a sample and with how far the sample's hour is from the hour being predicted.
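A minimal sketch of that weighting scheme in Python; the half-life and hour-spread parameters, and the (timestamp, hour, value) sample layout, are assumptions for illustration:

```python
import math

def predict(samples, target_hour, now,
            age_half_life_days=30.0, hour_sigma=2.0):
    """Weighted mean of (unix_timestamp, hour_of_day, value) samples."""
    num = den = 0.0
    for ts, hour, value in samples:
        # Weight decays exponentially with the sample's age in days...
        age_days = (now - ts) / 86400.0
        w_age = 0.5 ** (age_days / age_half_life_days)
        # ...and with the circular distance from the hour being predicted.
        hour_dist = min(abs(hour - target_hour), 24 - abs(hour - target_hour))
        w_hour = math.exp(-(hour_dist ** 2) / (2.0 * hour_sigma ** 2))
        w = w_age * w_hour
        num += w * value
        den += w
    return num / den if den else None
```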

Edit: if the most important thing is not hanging onto data:

Setting up bins as you propose (the naive solution) seems like the most reasonable approach. As new data comes in and is 'averaged' with the binned data, the new data can be given a larger weight to help recent changes overcome the 'inertia' of all the historical data.
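A sketch of that idea, assuming 24 hourly bins and an arbitrary smoothing factor; only the 24 running averages are stored per data point:

```python
class HourlyBins:
    """24 running averages, one per hour of day, updated in place."""

    def __init__(self, alpha=0.05):
        # alpha controls how fast new samples overcome the historical inertia.
        self.alpha = alpha
        self.bins = [None] * 24

    def add_sample(self, hour, value):
        current = self.bins[hour]
        if current is None:
            self.bins[hour] = float(value)
        else:
            self.bins[hour] = (1.0 - self.alpha) * current + self.alpha * float(value)

    def predict(self, hour):
        return self.bins[hour]
```

Because only the bin averages are kept, old data fades out implicitly as new samples arrive, rather than by deleting stored rows.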

Mikeb
Hmm, a weighted average based on those criteria. If I understand correctly, it still means I have to retain all the historical values if I want to recalculate to take new data into account, right?
Guss
I was assuming that the data points would be retained, yes; you could implement some policy that says data older than X gets dropped, archived, or otherwise moved if you don't want it in the system, or your weighting function could set the weights for values older than X to zero so they no longer contribute.
Mikeb
Well, my main concern is the size of the data that I need to retain due to the large sampling pool. I'd rather not retain every single data point, but it's still a good idea, thanks.
Guss