Does anyone know how to reduce the 'Number of executions' you can see in the execution plan of a SQL Query in MS SQL Server?

We have a query that runs slow on one production server (others are fine). When checking the execution plan, we see that one of the first steps is a Clustered Index Scan. On the servers that run fine, this scan is executed once. On the one that runs slow, it is executed 4 times.
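For context, the count we are looking at is the per-operator "Executes" value, which as far as I know corresponds to the "Number of executions" shown in the graphical plan. You can also get it in text form like this (the SELECT is just a placeholder for our real query):

SET STATISTICS PROFILE ON
GO
SELECT 1   -- placeholder: run the slow query here instead; the Executes column
           -- in the output shows how many times each operator (the Clustered
           -- Index Scan, in our case) was executed
GO
SET STATISTICS PROFILE OFF
GO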

We read here (http://www.sql-server-performance.com/tips/query_execution_plan_analysis_p2.aspx) that you should try to reduce it, but we can't find any tips on how to do this.

Both servers run the same version of Windows (2003) and SQL Server (2000).

Any help would be appreciated.

+5  A: 

Is the db schema really the same? If so, try updating statistics and defragmenting your indexes and tables.

http://msdn.microsoft.com/en-us/library/aa260645(SQL.80).aspx

http://technet.microsoft.com/en-us/library/cc966523.aspx
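For example, on SQL Server 2000 it would be something along these lines (YourTable is a placeholder; swap in the tables behind the slow query):

UPDATE STATISTICS YourTable WITH FULLSCAN
GO
DBCC DBREINDEX ('YourTable', '', 90)   -- '' rebuilds all indexes on the table, fillfactor 90
GO
-- or, to refresh statistics for every table in the database:
EXEC sp_updatestats
GO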

Arthur
A: 

Sorry, I'm answering this myself. Thanks Arthur for the links, they were useful (so I up-voted your answer). We updated the statistics, but it didn't help. We checked the fragmentation with DBCC SHOWCONTIG, but it wasn't very different from the other databases.
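For reference, the fragmentation check was along these lines (table2 is the same stand-in name used in the join snippet below):

DBCC SHOWCONTIG ('table2') WITH ALL_INDEXES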

I'm sorry for not posting the query, AdaTheDev, but I'd have to rename all the columns (it's fairly business-critical), and we've found a solution now.

We noticed the table had many more records than in the other databases. We added indexes on the relevant columns. The query had a join like:

inner join table2 t2 on t1.t1Id = t2.t1Id and t2.someOtherId = @parameter
where t2.aThirdId = @otherParameter

So we added three separate indexes, one for each Id column, which sped the query up from 30 s to 3-5 s (we did already have a combined index on t1Id and someOtherId, though).
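The new indexes were essentially this (the table and column names are the stand-ins from the snippet above, not our real ones):

CREATE INDEX IX_table2_t1Id        ON table2 (t1Id)
CREATE INDEX IX_table2_someOtherId ON table2 (someOtherId)
CREATE INDEX IX_table2_aThirdId    ON table2 (aThirdId)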

In the end we also cleaned up the table, because a lot of records were logically deleted and no longer needed. But the indexes did the trick.
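The clean-up itself was just a bulk delete of the flagged rows, something like this (isDeleted is a stand-in for our actual flag column):

DELETE FROM table2 WHERE isDeleted = 1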

Peter