I have a stored procedure that uses a view to pull 6 averages. The database is SQL Server 2000. When I run it in Query Analyzer, it takes roughly 9 seconds. What can I do to get better performance? Should I return the rows and compute the averages with LINQ instead? Would that be faster?

Here's an example of my current sproc:

create procedure [TestAvg]
(
    @CustomerNumber int
)
as

select
(select AVG(OrderTime) from OrderDetails where ProductID = 12 and DateDiff(day, DateFulfilled, GetDate()) <= 7 and CustomerNumber = @CustomerNumber) as P12D7,
(select AVG(OrderTime) from OrderDetails where ProductID = 12 and DateDiff(day, DateFulfilled, GetDate()) <= 30 and CustomerNumber = @CustomerNumber) as P12D30,
(select AVG(OrderTime) from OrderDetails where ProductID = 12 and DateDiff(day, DateFulfilled, GetDate()) <= 90 and CustomerNumber = @CustomerNumber) as P12D90,
(select AVG(OrderTime) from OrderDetails where ProductID = 16 and DateDiff(day, DateFulfilled, GetDate()) <= 7 and CustomerNumber = @CustomerNumber) as P16D7,
(select AVG(OrderTime) from OrderDetails where ProductID = 16 and DateDiff(day, DateFulfilled, GetDate()) <= 30 and CustomerNumber = @CustomerNumber) as P16D30,
(select AVG(OrderTime) from OrderDetails where ProductID = 16 and DateDiff(day, DateFulfilled, GetDate()) <= 90 and CustomerNumber = @CustomerNumber) as P16D90

Also, let me clarify the view mentioned above. Since this is SQL Server 2000, I cannot use an indexed view because the view uses a subquery. I suppose it could be rewritten to use joins; however, the last time we rewrote a query to use joins, data went missing (the subquery can return a null value, and the join dropped the entire row).
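For illustration only (the Customers table and the derived-table shape here are assumptions, not from the actual schema): a LEFT OUTER JOIN keeps the outer row even when the aggregated side has no match, which is the behavior an INNER JOIN rewrite loses.

-- Hypothetical sketch: an INNER JOIN would drop customers with no
-- matching OrderDetails rows; a LEFT OUTER JOIN keeps them with a
-- NULL average instead of omitting the entire row.
SELECT  c.CustomerNumber,
        a.AvgOrderTime
FROM    Customers c
LEFT OUTER JOIN
(
    SELECT  CustomerNumber,
            AVG(OrderTime) AS AvgOrderTime
    FROM    OrderDetails
    GROUP BY CustomerNumber
) a ON a.CustomerNumber = c.CustomerNumber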

+1  A: 

I would recommend getting the data into a table variable first, or maybe two table variables: one for ProductID 12 and one for ProductID 16. From these table variables, calculate the averages as required, and then return those from the stored procedure.

DECLARE @OrderDetails12 TABLE(
    DateFulfilled DATETIME,
    OrderTime FLOAT
)

INSERT INTO @OrderDetails12
SELECT  DateFulfilled,
        OrderTime
FROM    OrderDetails
WHERE   ProductID = 12
AND     DateDiff(day, DateFulfilled, GetDate()) <= 90
AND     CustomerNumber = @CustomerNumber

DECLARE @OrderDetails16 TABLE(
    DateFulfilled DATETIME,
    OrderTime FLOAT
)

INSERT INTO @OrderDetails16
SELECT  DateFulfilled,
        OrderTime
FROM    OrderDetails
WHERE   ProductID = 16
AND     DateDiff(day, DateFulfilled, GetDate()) <= 90
AND     CustomerNumber = @CustomerNumber
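The answer stops short of the final step it describes, calculating the averages and returning them. A minimal sketch of that step, reusing the column aliases from the original sproc, could be:

-- Calculate the six averages from the in-memory table variables.
-- Each table variable already holds at most 90 days of data for one
-- product, so the 7- and 30-day averages re-filter the smaller set.
SELECT
    (SELECT AVG(OrderTime) FROM @OrderDetails12 WHERE DateDiff(day, DateFulfilled, GetDate()) <= 7)  AS P12D7,
    (SELECT AVG(OrderTime) FROM @OrderDetails12 WHERE DateDiff(day, DateFulfilled, GetDate()) <= 30) AS P12D30,
    (SELECT AVG(OrderTime) FROM @OrderDetails12) AS P12D90,
    (SELECT AVG(OrderTime) FROM @OrderDetails16 WHERE DateDiff(day, DateFulfilled, GetDate()) <= 7)  AS P16D7,
    (SELECT AVG(OrderTime) FROM @OrderDetails16 WHERE DateDiff(day, DateFulfilled, GetDate()) <= 30) AS P16D30,
    (SELECT AVG(OrderTime) FROM @OrderDetails16) AS P16D90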

Also, creating the correct indexes on the table will help a lot.

astander
Down to 2 seconds. Thanks! I forgot about trying that since I was basically running a query over the entire dataset 3 times for each product. Regarding the indexing, what do you mean by setting up the correct indexes?
Jason N. Gaylord
Remember, this is a view.
Jason N. Gaylord
+2  A: 

How much data would leave the database server if it were unaggregated, and how long would that operation take? The difference in data size will tell you whether the calculation time on the server is outweighed by the transfer time plus local calculation.

Also - look at that DATEDIFF usage and rewrite it so the optimizer can use an index (try DateFulfilled >= SomeCalculatedDate1 instead of DATEDIFF). Review your execution plan to ensure it uses an index seek (best) or index scan (good) instead of a table scan.
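For example, a sketch of the sargable form for one of the six averages (the @Cutoff names are illustrative, and this assumes it runs inside the sproc where @CustomerNumber is declared):

-- Compute the cutoff dates once, then compare the column directly.
-- A bare DateFulfilled on the left side lets the optimizer seek an
-- index; wrapping it in DATEDIFF forces the function on every row.
-- Note: DATEADD(day, -90, GETDATE()) compares full datetimes, while
-- DATEDIFF(day, ...) counts midnight boundaries, so verify the
-- edge-day behavior matches what you need.
DECLARE @Cutoff7 DATETIME, @Cutoff30 DATETIME, @Cutoff90 DATETIME
SELECT  @Cutoff7  = DATEADD(day, -7,  GETDATE()),
        @Cutoff30 = DATEADD(day, -30, GETDATE()),
        @Cutoff90 = DATEADD(day, -90, GETDATE())

SELECT  AVG(OrderTime)
FROM    OrderDetails
WHERE   ProductID = 12
AND     DateFulfilled >= @Cutoff90
AND     CustomerNumber = @CustomerNumber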

Also, ensure there is an index on CustomerNumber, ProductID, DateFulfilled.
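A minimal sketch of such an index (the index name is illustrative; since OrderDetails is a view here, the index belongs on the underlying table):

-- Composite index matching the WHERE clause: equality columns first,
-- then the range column, so a seek can satisfy all three predicates.
CREATE INDEX IX_OrderDetails_Cust_Prod_Date
ON OrderDetails (CustomerNumber, ProductID, DateFulfilled)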

Cade Roux
It would vary between 0 and 15,000 rows, I'd imagine.
Jason N. Gaylord
I would aggregate on the server (and optimize the ability to return those aggregates using indexes, table caching/updating, triggers, etc., in increasing order of desperation). Looping through 15,000 rows after transferring them to the client, instead of returning a single row with 6 values, is a no-brainer decision to me: minimize what goes over the wire and do it all on the server.
Cade Roux
Regarding the view comment, I've added some additional stuff to the posted question.
Jason N. Gaylord
If OrderDetails is a view, you'll need to look at the underlying table indexing - this can be guided by looking at the execution plan.
Cade Roux