views:

188

answers:

4

Hello all,

I have a big query and I am trying to improve it part by part; however, due to the caching mechanism and the simplicity of the T-SQL code, I don't have a reliable environment for testing speeds. The queries whose speed I am trying to improve all last about 1 or 2 seconds, so I can't see the difference clearly, and creating dummy data for each comparison takes too much time. What do you suggest I do? I am using my company's database, so clearing the cache every time could be harmful, I guess.

Edit: After reading all the comments, I did some experimenting and got some ideas. But is looking at all those values in the statistics exactly what I want?

Here are the problems that I faced:

Execution Plan: First I ran some queries and looked at the Execution Plan. At the top, Query cost (relative to the batch) never shows a value other than 0.00%, even when my query lasts more than 1 minute. All I get is 0.00%, and under the graphs all the values are 0% as well.

DB Statistics: Now I am testing two queries. The first one is

SELECT * FROM My_TABLE /* WHERE
my_primarykey LIKE '%ht_atk%' */

And the second one is the comment-free version:

SELECT * FROM My_TABLE WHERE
my_primarykey LIKE '%ht_atk%'

Here are my results from DB Statistics for the first query:

Application Profile Statistics      
  Timer resolution (milliseconds)   0   0
  Number of INSERT, UPDATE, DELETE statements   0   0
  Rows effected by INSERT, UPDATE, DELETE statements    0   0
  Number of SELECT statements   2   2
  Rows effected by SELECT statements    16387   15748,4
  Number of user transactions   7   6,93182
  Average fetch time    0   0
  Cumulative fetch time 0   0
  Number of fetches 0   0
  Number of open statement handles  0   0
  Max number of opened statement handles    0   0
  Cumulative number of statement handles    0   0

Network Statistics      
  Number of server roundtrips   3   3
  Number of TDS packets sent    3   3
  Number of TDS packets received    252 242,545
  Number of bytes sent  868 861,091
  Number of bytes received  1,01917e+006    981160

Time Statistics     
  Cumulative client processing time 0   0,204545
  Cumulative wait time on server replies    25  10,0455

Second Query:

Application Profile Statistics      
  Timer resolution (milliseconds)   0   0
  Number of INSERT, UPDATE, DELETE statements   0   0
  Rows effected by INSERT, UPDATE, DELETE statements    0   0
  Number of SELECT statements   2   2
  Rows effected by SELECT statements    14982   15731,3
  Number of user transactions   5   6,88889
  Average fetch time    0   0
  Cumulative fetch time 0   0
  Number of fetches 0   0
  Number of open statement handles  0   0
  Max number of opened statement handles    0   0
  Cumulative number of statement handles    0   0

Network Statistics      
  Number of server roundtrips   3   3
  Number of TDS packets sent    3   3
  Number of TDS packets received    230 242,267
  Number of bytes sent  752 858,667
  Number of bytes received  932387  980076

Time Statistics     
  Cumulative client processing time 1   0,222222
  Cumulative wait time on server replies    8   10

Every single time I execute them, the values change randomly, and I can't form a clear view of which query is faster.

Lastly when I do that:

SET STATISTICS TIME ON
SET STATISTICS IO ON

For both queries, the results are the same:

Table 'my_TABLE'. Scan count 1, logical reads 682, physical reads 0, read-ahead reads 0.

So again I couldn't make a comparison between the two queries. How do I interpret the results? Am I looking in the wrong place? How can I compare those two simple queries above?

+1  A: 

Use the query analyzer to find out the expensive parts of your query (this depends on DB statistics, so use representative data).

This will let you zero in on the parts you should optimize.

Trying to time things with a stopwatch or looking at the time it takes for the results to return to SSMS will be guesswork at best.
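For example, you can ask SQL Server for the plan as text before committing to a run. This is only a sketch; the table and filter are taken from the question above:

```sql
-- Show the estimated plan as text instead of executing the query
SET SHOWPLAN_TEXT ON;
GO
SELECT * FROM My_TABLE WHERE my_primarykey LIKE '%ht_atk%';
GO
SET SHOWPLAN_TEXT OFF;
GO
```

The operators in the output (and their estimated costs) tell you which part of the batch the optimizer expects to be expensive, without the noise of timing individual runs.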

Oded
"This will let you zero in on the parts you should optimize." What do you mean by "zero in"? I am not a native English speaker, so I didn't get it. Another thing I will ask you about is DB Stats. I think you meant the part that says "Cumulative wait time on server replies" in DB Stats, but it changes every time I execute my code. For example, Cumulative wait time on server replies: 4 / 3,30529e+006; 8 / 2,75441e+006; 13 / 2,06581e+006; 0 / 1,83627e+006; 36 / 1,27127e+006. How should I interpret these values? Or am I looking in the wrong place? I also didn't get your last sentence.
stckvrflw
"Zero in" means you will be able to find the exact area of the problem. Also, don't look at `Cumulative wait time`. Look at which parts of the query are taking the largest percentage - that's where the time is spent.
Oded
Hello, I updated the question. Where do I see that part-by-part view? Is it in the DB Stats? You can check my DB Stats above.
stckvrflw
A: 

A good way is to see the execution plan. It tells a lot about how the query will execute and what is taking most of the time. You can even decide to create indexes on that basis. It is very useful, especially for large queries. SQL Server usually finds the best possible way to execute a query, but you can improve on that by providing it with indexes on the fields used in WHERE and JOIN clauses. If you cannot read the execution plan, which is a graph with estimated costs and timings, you can read about it in detail on MSDN.
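As a sketch using the table and column from the question (the index name is made up), creating an index on a filtered column looks like this. Note that a leading-wildcard predicate such as `LIKE '%ht_atk%'` cannot seek on the index, so it mainly helps equality or trailing-wildcard searches:

```sql
-- Hypothetical index supporting lookups on my_primarykey
CREATE NONCLUSTERED INDEX IX_MyTable_MyPrimaryKey
    ON My_TABLE (my_primarykey);

-- This search can use an index seek:
SELECT * FROM My_TABLE WHERE my_primarykey LIKE 'ht_atk%';

-- This one cannot: the leading wildcard forces a scan.
SELECT * FROM My_TABLE WHERE my_primarykey LIKE '%ht_atk%';
```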

affan
In the execution plan, the cost of everything seems to be 0%. Even if my query lasts about 1 minute, the Query Cost (relative to the batch) is still 0%. Is this a problem, by the way?
stckvrflw
+1  A: 

Run SET STATISTICS TIME ON and SET STATISTICS IO ON, then run the big query in text mode. You can put some PRINT statements after each part of the query you want to optimize.
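A minimal sketch of this idea, using the table from the question (the marker text and the two sample parts are arbitrary):

```sql
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

PRINT '--- part 1 ---';
SELECT COUNT(*) FROM My_TABLE;           -- first part under test

PRINT '--- part 2 ---';
SELECT * FROM My_TABLE
WHERE my_primarykey LIKE '%ht_atk%';     -- second part under test
```

Each part's IO and time statistics then appear under its PRINT marker in the Messages output, so you can attribute the reads and CPU time to the right section of the batch.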

You will get lines like:

Table 'Table'. Scan count 1, logical reads 10, physical reads 0, read-ahead reads 0, lob    logical reads 387, lob physical reads 0, lob read-ahead reads 0.

Try to put some representative data in the tables and check the Scan count and logical reads for big numbers.

You can also check the Actual Execution Plan and search for any clustered index scan. This may indicate that there is a missing index in some table.

Jose Chama
Those lines didn't differ from one another when I tried to compare the speeds of the two simple queries I mentioned above while updating my question.
stckvrflw
See question comments...
Jose Chama
A: 

As @affan said, the best way is to use the information given by the execution plan, but you can always set up a simple timer with code like

IF @debug > 0 BEGIN
    DECLARE @now DATETIME;
    SET @now = CURRENT_TIMESTAMP;
END

and

IF @debug > 0 BEGIN
    SELECT DATEDIFF(ms,@now,CURRENT_TIMESTAMP)/1000.0 AS Runtime;
END
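Put together, a minimal self-contained version of the two snippets might look like this (the query under test is taken from the question; in practice @debug would be a parameter of your procedure rather than declared inline):

```sql
DECLARE @debug INT = 1;
DECLARE @now DATETIME;

IF @debug > 0
    SET @now = CURRENT_TIMESTAMP;

-- the query under test
SELECT * FROM My_TABLE WHERE my_primarykey LIKE '%ht_atk%';

IF @debug > 0
    SELECT DATEDIFF(ms, @now, CURRENT_TIMESTAMP) / 1000.0 AS RuntimeSeconds;
```

Bear in mind that DATETIME has roughly 3 ms resolution, so for queries in the 1-2 second range you would want to average over several runs.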
Dark
The counters give different results every time, but what I want to see is 100% reliable results when comparing two queries.
stckvrflw