I did create two new indexes on the tables used by the SP. The new results show that, in the part with the problematic joins, the scans have been converted to seeks. I think a seek is better than a scan operation. On the other hand, the execution takes more or less the same time as it did without the new indexes.

So, how can I satisfy myself before putting the new version of the SP into production?

For instance, would changing the parameters of the SP help me see whether the new version is faster than the old one, or is there something else I should do?

Regards bk

+1  A: 

A few things to do:
1) Ensure you are comparing performance fairly by clearing the data cache and execution plan cache after each test run. You can clear these down using the following (recommended only on your dev/test environment):

CHECKPOINT -- force dirty pages in the buffer to be written to disk
DBCC DROPCLEANBUFFERS -- clear the data cache
DBCC FREEPROCCACHE -- clear the execution plan cache
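
For reference, here's a minimal sketch of how the clear-down and a timed run might be combined into one repeatable test script (the procedure name and parameter below are placeholders, not from the question):

CHECKPOINT;              -- force dirty pages in the buffer to be written to disk
DBCC DROPCLEANBUFFERS;   -- clear the data cache
DBCC FREEPROCCACHE;      -- clear the execution plan cache
GO

SET STATISTICS IO ON;    -- report logical/physical reads per table
SET STATISTICS TIME ON;  -- report CPU time and elapsed time

EXEC dbo.usp_MyProc @SomeParam = 42;   -- hypothetical sproc under test

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;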

2) Run SQL Profiler to record the Reads/Writes/CPU/Duration for each scenario (with/without the indexes). This gives you a range of metrics to compare, rather than just the time shown in SSMS.
Edit: To run a SQL Profiler trace, in Management Studio go to Tools -> SQL Server Profiler. When prompted, specify the db server to run the trace against. A "Trace Properties" dialog will appear - you should just be able to click "Run" to start a default trace. Then execute your stored procedure and it will appear in SQL Profiler, with the Duration, number of reads etc. shown alongside it.
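
If Profiler isn't convenient, a similar set of figures can be pulled from the procedure stats DMV on SQL Server 2008+ - a rough sketch only, assuming the sproc has been executed at least once since the cache was last cleared:

-- Reads/writes/CPU/duration for cached procedures
-- (figures reset when the plan cache is cleared or the plan is evicted)
SELECT
    OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
    ps.execution_count,
    ps.last_logical_reads,
    ps.last_logical_writes,
    ps.last_worker_time  AS last_cpu_microseconds,
    ps.last_elapsed_time AS last_duration_microseconds
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id = DB_ID()
ORDER BY ps.last_elapsed_time DESC;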

3) Test with much larger volumes of data than you currently have. If you test with small amounts of data, the difference is often difficult to see on duration alone (one way to bulk up a test table is sketched below).
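
If you don't have a bigger data set to hand, one way to inflate a test table is a numbers-style cross join - a sketch only, where the table and column names are placeholders for your own schema:

-- Insert ~100,000 synthetic rows into a (hypothetical) test table
;WITH Numbers AS (
    SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
    FROM sys.all_objects AS a
    CROSS JOIN sys.all_objects AS b
)
INSERT INTO dbo.TestOrders (CustomerId, OrderDate, Amount)
SELECT
    n % 500 + 1,                         -- spread rows across 500 customers
    DATEADD(DAY, -(n % 365), GETDATE()), -- dates spread over the last year
    (n % 1000) * 1.25
FROM Numbers;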

I recently blogged here about how to fairly test the performance of different variants of a query, which goes into a bit more detail about how I do it.

AdaTheDev
Thanks Ada. Could you please expand on the second item above? Regards bk
blgnklc
@blgnklc - done :)
AdaTheDev
I have been doing the items above, but item one did not work the way you described it on your blog (like comparing two paper boys to see which one can complete a given round the quickest). The clear-down methods don't seem to work out. When I run the old version of the SP for the first time, it takes a lot of time; then I use the clear-down methods and run the new version of the SP. On the other hand, when I clear again and run the old version of the SP a second time, it takes very little time. First time, old version SP: 4:24 minutes. Second time, old version SP: 0:18 minutes. Am I missing something?
blgnklc
@blgnklc - If you're doing all 3 lines of the "clear down" between each run of the sproc, and there is such a vast difference between times, then there must be something else affecting it (e.g. the server in general under heavy load)
AdaTheDev
@AdaTheDev - Then what is the point of the message below, which appears after those three lines have been run? I thought it cleared things out before every new SP execution... Now I see there are other things that may affect it. "DBCC execution completed. If DBCC printed error messages, contact your system administrator." (printed twice)
blgnklc
@blgnklc - I'm not sure I follow. Those messages are just the output from those 3 "clear-down" lines of SQL, saying that the DBCC commands have completed OK. After that block of 3 statements has finished, the next run of the sproc will be against a clear cache.
AdaTheDev
@AdaTheDev - could you please have a look at: http://stackoverflow.com/questions/2363640/query-execution-plan-missing-index
blgnklc