views:

102

answers:

5

I have a stored procedure on a busy database which consistently comes out top of the list of expensive queries (by some distance). The query is very simple: it takes a single parameter (@ID, int) which is the primary key of the table, and selects the record that matches that ID. The primary key is an identity field with a clustered index, so I am stumped as to how to optimise this any further.

The query is as follows:

CREATE PROCEDURE [dbo].[P_Call_Get]

    @ID int = null

AS

    select ID,
        AppID,
        AgentID,
        AgentLogin,
        Ext,
        VDN,
        VDNName,
        Skill,
        SkillName,
        CallFrom,
        TelNoFrom,
        ParentCallID,
        CallStart,
        ACWStart,
        CallEnd,
        Outcome,
        StageID,
        TxTo,
        TxSuccess,
        ServiceID,
        DiallerID,
        CRC,
        TSCallID,
        CallDirection,
        [Manual],
        CallBackAgent,
        CallBackDateTime,
        Notes
    from P_Call
    where (ID = @ID or @ID is null)

I'm not sure of the best way to post the execution plan - all it shows is that 100% of the cost is taken up by the clustered index scan.

A: 

Can you use table partitioning? It may fix the issue.

samlet
+7  A: 

I think that by using where (ID = @ID or @ID is null) you are getting a sub-optimal plan. Divide this into 2 separate queries, so that in the case where @ID is not null it will just look the record up directly and you will get a seek rather than a scan appearing in the plan. You could also create a view with the columns you require to avoid the repetition (i.e. the query without any where clause):

select ID,
    AppID,
    AgentID,
    AgentLogin,
    Ext,
    VDN,
    VDNName,
    Skill,
    SkillName,
    CallFrom,
    TelNoFrom,
    ParentCallID,
    CallStart,
    ACWStart,
    CallEnd,
    Outcome,
    StageID,
    TxTo,
    TxSuccess,
    ServiceID,
    DiallerID,
    CRC,
    TSCallID,
    CallDirection,
    [Manual],
    CallBackAgent,
    CallBackDateTime,
    Notes
from P_Call
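
Following that advice, a minimal sketch of the split procedure (column list abbreviated for brevity; each branch would carry the full column list from the original query):

```sql
-- Sketch only: because each branch is a separate statement, the optimizer
-- compiles a separate plan per statement - a seek plan for the ID lookup
-- and a scan plan for the "return everything" case.
ALTER PROCEDURE [dbo].[P_Call_Get]
    @ID int = null
AS
    IF @ID IS NOT NULL
        SELECT ID, AppID, AgentID, /* ...remaining columns as above... */ Notes
        FROM P_Call
        WHERE ID = @ID;     -- clustered index seek on the primary key
    ELSE
        SELECT ID, AppID, AgentID, /* ...remaining columns as above... */ Notes
        FROM P_Call;        -- clustered index scan (all rows are wanted here)
```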
Martin Smith
Yes, that's exactly it. +1
Quassnoi
Splitting into 2 queries did exactly that - I now get a seek rather than a scan in my execution plan. Which scares me a little, as I have used this approach in a number of SPs which search on a number of optional criteria. I was going to ask about a better approach, but I think optimising those is a separate question.
Macros
I'm not sure there is any way around the split-into-2-queries approach. It can be a bit of a pain to keep complex queries in sync though, especially if you are using this approach for multiple parameters in the query.
Martin Smith
@Macros There is no way of avoiding the split - SQL Server normally only maintains a single execution plan per statement.
Kragen
A: 

Hi, how many rows does this query return?

samlet
The query will only return 1 row
Macros
+1  A: 

Try cleaning out procedure cache and memory buffers:

DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE

Doing so before testing your procedure's performance will prevent the use of cached execution plans and previously cached data pages (but be careful running these on a busy production server, as everything will have to be recompiled and re-read from disk).

RoadWarrior
A: 

How many rows are in the table? You do realize that a "clustered index scan" = full table scan.

Mike Ritacco
I didn't... but I do now! I have spent the last 2 days optimising SPs based on this.
Macros
Good deal. You probably already figured out why, but I should have explained why the optimizer was selecting a sub-optimal plan. The reason the index cannot be used for a seek is the (or @ID is null) portion of the WHERE clause: the single plan has to cover the case where @ID is null and every row must be returned, so the optimizer falls back to a scan.
Mike Ritacco