Hello all,
I have a big query and I am trying to improve it part by part. However, because of the caching mechanism and the simplicity of the T-SQL involved, I don't have a reliable environment for comparing speeds. The queries I am trying to speed up all finish in about 1 or 2 seconds, so I can't see the difference clearly, and creating dummy data for every comparison takes too much time. What do you suggest I do? I am working against my company's database, so I guess clearing the cache every time could be harmful.
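From what I have read, "removing the cache" would mean something like the commands below, but I have not dared to run them against the company database (this is just my understanding of the approach, not something I have tried):

CHECKPOINT                -- write dirty pages to disk first, so DROPCLEANBUFFERS really empties the buffer pool
DBCC DROPCLEANBUFFERS     -- empty the buffer cache, so the next run reads pages from disk again
DBCC FREEPROCCACHE        -- throw away all cached plans, so every query gets recompiled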
Edit: After reading all the comments, I experimented a bit and got some ideas, but I ran into new problems. Is looking at all those values in the statistics really what I need?
Here are the problems that I faced:
Execution Plan: First I ran some queries and looked at the Execution Plan. At the top, "Query cost (relative to the batch)" never shows anything other than 0.00%, even when my query runs for more than 1 minute. All I ever get is 0.00%, and under the graphs all the values are 0% as well.
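In case the graphical plan is the problem: would capturing the actual plan as XML, roughly like this, give me numbers I can compare instead? (Same filter as in the queries below.)

SET STATISTICS XML ON   -- returns the actual execution plan as an XML document next to the result set

SELECT * FROM My_TABLE
WHERE my_primarykey LIKE '%ht_atk%'

SET STATISTICS XML OFF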
DB Statistics: Now I am testing two queries. The first one is:
SELECT * FROM My_TABLE
/*
WHERE
my_primarykey LIKE '%ht_atk%'
*/
And the second one is the comment-free version:
SELECT * FROM My_TABLE
WHERE
my_primarykey LIKE '%ht_atk%'
Here are my results from DB Statistics. First query:
Application Profile Statistics
Timer resolution (milliseconds) 0 0
Number of INSERT, UPDATE, DELETE statements 0 0
Rows effected by INSERT, UPDATE, DELETE statements 0 0
Number of SELECT statements 2 2
Rows effected by SELECT statements 16387 15748,4
Number of user transactions 7 6,93182
Average fetch time 0 0
Cumulative fetch time 0 0
Number of fetches 0 0
Number of open statement handles 0 0
Max number of opened statement handles 0 0
Cumulative number of statement handles 0 0
Network Statistics
Number of server roundtrips 3 3
Number of TDS packets sent 3 3
Number of TDS packets received 252 242,545
Number of bytes sent 868 861,091
Number of bytes received 1,01917e+006 981160
Time Statistics
Cumulative client processing time 0 0,204545
Cumulative wait time on server replies 25 10,0455
Second Query:
Application Profile Statistics
Timer resolution (milliseconds) 0 0
Number of INSERT, UPDATE, DELETE statements 0 0
Rows effected by INSERT, UPDATE, DELETE statements 0 0
Number of SELECT statements 2 2
Rows effected by SELECT statements 14982 15731,3
Number of user transactions 5 6,88889
Average fetch time 0 0
Cumulative fetch time 0 0
Number of fetches 0 0
Number of open statement handles 0 0
Max number of opened statement handles 0 0
Cumulative number of statement handles 0 0
Network Statistics
Number of server roundtrips 3 3
Number of TDS packets sent 3 3
Number of TDS packets received 230 242,267
Number of bytes sent 752 858,667
Number of bytes received 932387 980076
Time Statistics
Cumulative client processing time 1 0,222222
Cumulative wait time on server replies 8 10
Every single time I execute them, the values change randomly and I can't get a clear picture of which query is faster.
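Would a plain wall-clock measurement around each query, roughly like the sketch below, be any more reliable, or does the caching make that meaningless too?

DECLARE @t0 datetime
SET @t0 = GETDATE()

SELECT * FROM My_TABLE
WHERE my_primarykey LIKE '%ht_atk%'

SELECT DATEDIFF(ms, @t0, GETDATE()) AS elapsed_ms   -- rough elapsed time of the query above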
Lastly, when I do this:
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
For both queries, the results are the same.
Table 'my_TABLE'. Scan count 1, logical reads 682, physical reads 0, read-ahead reads 0.
So again I couldn't make a comparison between the two queries. How should I interpret the results? Am I looking in the wrong place? How can I compare those two simple queries above?
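One last question: is running both versions in a single batch like this the right way to get a separate "CPU time / elapsed time" line per statement in the Messages tab?

SET STATISTICS TIME ON
SET STATISTICS IO ON

SELECT * FROM My_TABLE                  -- version with the filter commented out

SELECT * FROM My_TABLE
WHERE my_primarykey LIKE '%ht_atk%'     -- filtered version

SET STATISTICS TIME OFF
SET STATISTICS IO OFF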