If you are going to have a high read/write ratio for this data, you might want to consider an indexed view. I have used this approach all over the place to aggregate by buckets of time. I just got around to blogging the example; here is the code:
create table timeSeries (
timeSeriesId int identity primary key clustered
,updateDate datetime not null
,payload float not null
)
insert timeSeries values ('2009-06-16 12:00:00', rand())
insert timeSeries values ('2009-06-16 12:00:59', rand())
insert timeSeries values ('2009-06-16 12:01:00', rand())
insert timeSeries values ('2009-06-16 12:59:00', rand())
insert timeSeries values ('2009-06-16 01:00:00', rand())
insert timeSeries values ('2009-06-16 01:30:00', rand())
insert timeSeries values ('2009-06-16 23:59:00', rand())
insert timeSeries values ('2009-06-17 00:01:00', rand())
insert timeSeries values ('2009-06-17 00:01:30', rand())
go
-- indexed view that pre-aggregates payload by day and minute bucket
create view timeSeriesByMinute_IV with schemabinding as
select
dayBucket = datediff(day, 0, updateDate) -- whole days since 1900-01-01
,minuteBucket = datediff(minute, 0, (updateDate - datediff(day, 0, updateDate))) -- minutes since midnight on that day
,payloadSum = sum(payLoad)
,numRows = count_big(*)
from dbo.timeSeries
group by
datediff(day, 0, updateDate)
,datediff(minute, 0, (updateDate - datediff(day, 0, updateDate)))
go
create unique clustered index CU_timeSeriesByMinute_IV on timeSeriesByMinute_IV (dayBucket, minuteBucket)
go
-- wrapper view that derives the average from the persisted sum and count
create view timeSeriesByMinute as
select
dayBucket
,minuteBucket
,payloadSum
,numRows
,payloadAvg = payloadSum / numRows
from dbo.timeSeriesByMinute_IV with (noexpand) -- noexpand is needed for the optimizer to use the view's index on non-Enterprise editions
go
-- example lookup: translate a datetime into its day/minute bucket pair
declare @timeLookup datetime, @dayBucket int, @minuteBucket int
select
@timeLookup = '2009-06-16 12:00:00'
,@dayBucket = datediff(day, 0, @timeLookup)
,@minuteBucket = datediff(minute, 0, (@timeLookup - datediff(day, 0, @timeLookup)))
select * from timeSeriesByMinute where dayBucket = @dayBucket and minuteBucket = @minuteBucket
You can see the example lookup at the end of the code block. Clearly you can also define ranges to query across instead of just seeking to a particular dayBucket/minuteBucket pair; a rough sketch of that follows.
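For instance, a range lookup against the same wrapper view might look something like the following. This is a minimal sketch that assumes the range falls within a single day; @fromTime, @toTime and the bucketStart column are illustrative names I've added, not part of the original example.

-- sketch: every minute bucket from 12:00 (inclusive) to 13:00 (exclusive) on 2009-06-16
declare @fromTime datetime, @toTime datetime
select @fromTime = '2009-06-16 12:00:00', @toTime = '2009-06-16 13:00:00'
select
dayBucket
,minuteBucket
,bucketStart = dateadd(minute, minuteBucket, dateadd(day, dayBucket, 0)) -- turn the buckets back into a datetime
,payloadSum
,numRows
,payloadAvg
from timeSeriesByMinute
where dayBucket = datediff(day, 0, @fromTime)
and minuteBucket >= datediff(minute, 0, (@fromTime - datediff(day, 0, @fromTime)))
and minuteBucket < datediff(minute, 0, (@toTime - datediff(day, 0, @toTime)))

A range that crosses midnight spans more than one dayBucket, so you would either split it per day or filter on (dayBucket, minuteBucket) pairs, since minuteBucket resets to zero at the start of each day.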