I'm refactoring some older SQL, which is struggling after 4 years and 1.7m rows of data. Is there a way to improve the following MS SQL query:

SELECT     ServiceGetDayRange_1.[Display Start Date], 
SUM (CASE WHEN Calls.line_date BETWEEN [Start Date] AND [End Date] THEN 1 ELSE 0 END) AS PerDayCount
FROM         dbo.ServiceGetDayRange(GETUTCDATE(), 30, @standardBias, @daylightBias, @DST_startMonth, @DST_endMonth, @DST_startWeek, @DST_endWeek, @DST_startHour, @DST_endHour, @DST_startDayNumber, @DST_endDayNumber) AS ServiceGetDayRange_1 CROSS JOIN
                      (select [line_date] from dbo.l_log where dbo.l_log.line_date > dateadd(day,-31,GETUTCDATE())) as Calls
GROUP BY ServiceGetDayRange_1.[Display Start Date], ServiceGetDayRange_1.[Display End Date]
ORDER BY [Display Start Date]

It counts log entries per day over the previous 30 days (the ServiceGetDayRange function returns a table detailing the day ranges, TZ aligned) for plotting on a chart.. useless information, but I'm not the client.

The execution plan states that 99% of the exec time is spent counting the entries.. as you would expect. Very little overhead in working out the TZ offsets (remember, max 30 rows).

Stupidly I thought 'ah, indexed view', but then realised I can't bind an indexed view to a function.

Current exec time is 6.25 seconds. Any improvement on that gets +rep.

Thanks in advance.

+3  A: 

Is it faster if you turn the CASE into a WHERE?

SELECT     ServiceGetDayRange_1.[Display Start Date], COUNT(*) AS PerDayCount
FROM       dbo.ServiceGetDayRange(GETUTCDATE(), 30, @standardBias, @daylightBias, @DST_startMonth, @DST_endMonth, @DST_startWeek, @DST_endWeek, @DST_startHour, @DST_endHour, @DST_startDayNumber, @DST_endDayNumber) AS ServiceGetDayRange_1 CROSS JOIN
                      (select [line_date] from dbo.l_log where dbo.l_log.line_date > dateadd(day,-31,GETUTCDATE())) as Calls
WHERE Calls.line_date BETWEEN [Start Date] AND [End Date]
GROUP BY ServiceGetDayRange_1.[Display Start Date], ServiceGetDayRange_1.[Display End Date]
ORDER BY [Display Start Date]
Scott Stafford
omg... it's a whole 4.5 seconds faster... I actually discounted that as I assumed it would count the date range (i.e. always be 30). You, sir, are a gent.
Sean.C
Glad to hear it. At a guess, I'd think that for a CASE it would have to visit every row and evaluate the expression in an unoptimizable way. The COUNT(*)/WHERE has the opportunity to make better use of indices, maybe...
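For instance (just a sketch, assuming dbo.l_log doesn't already have an index on the date column), a plain nonclustered index on line_date would give that range predicate something to seek on:

CREATE NONCLUSTERED INDEX IX_l_log_line_date  -- hypothetical name; assumes no equivalent index already exists
    ON dbo.l_log (line_date);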
Scott Stafford
A: 

6.25 seconds for nearly 2m rows is pretty good.. maybe try a count of the valid rows (your 1/0 conditional should allow that) as opposed to a sum of values.. I think that's more efficient in Oracle environments.
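Something like this, maybe (just a sketch of your existing query; COUNT ignores NULLs, so dropping the ELSE 0 counts only the rows that fall inside the range):

SELECT     ServiceGetDayRange_1.[Display Start Date],
           COUNT(CASE WHEN Calls.line_date BETWEEN [Start Date] AND [End Date] THEN 1 END) AS PerDayCount
FROM       dbo.ServiceGetDayRange(GETUTCDATE(), 30, @standardBias, @daylightBias, @DST_startMonth, @DST_endMonth, @DST_startWeek, @DST_endWeek, @DST_startHour, @DST_endHour, @DST_startDayNumber, @DST_endDayNumber) AS ServiceGetDayRange_1 CROSS JOIN
           (select [line_date] from dbo.l_log where dbo.l_log.line_date > dateadd(day,-31,GETUTCDATE())) as Calls
GROUP BY   ServiceGetDayRange_1.[Display Start Date], ServiceGetDayRange_1.[Display End Date]
ORDER BY   [Display Start Date]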

glasnt
I actually also thought 6.25 seconds wasn't bad all things considered, and almost left it at that - pleased I didn't. I now know that CASE-based counts are not always better.
Sean.C