views: 33 · answers: 2

I'm writing a stored procedure in SQL Server that joins about 10 tables. When I exclude one particular table from the query, the logical reads look reasonable. When I include it, though, the logical reads on some of the tables rise to 9,000. This table has a clustered primary key, and I'm joining it the same way as the other tables in the query (anotherTable.FK = thisTable.PK).

Could the logical reads SQL Server is showing me be incorrect? What can I look for to improve this behavior?

Thanks

A: 

It's difficult to pinpoint the problem without knowing all the details; a query over 10 tables is fairly large.

I would verify that the query plan looks correct. Is it using the indexes rather than table scanning? Is it joining in the order you expect?
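To see where the logical reads are coming from, you can turn on I/O statistics before running the query. A minimal sketch, assuming placeholder table and column names from the question:

```sql
-- Show per-table logical/physical reads in the Messages tab.
SET STATISTICS IO ON;

-- Run the problem query (names here are placeholders);
-- compare the reads with and without the extra join.
SELECT a.SomeColumn, t.Detail
FROM anotherTable AS a
JOIN thisTable AS t ON a.FK = t.PK;

SET STATISTICS IO OFF;
```

Together with the actual execution plan (Ctrl+M in Management Studio), this tells you which table's reads jump when the join is added.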

Also, using temp tables has helped me with SQL tuning. Break the large query into smaller pieces with SELECT ... INTO #tempTable; materializing intermediate results that way has made the difference for me before.
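A rough sketch of that approach, with hypothetical table and column names:

```sql
-- Materialize a narrow intermediate result first...
SELECT a.FK, a.SomeColumn
INTO #intermediate
FROM anotherTable AS a
JOIN someOtherTable AS s ON a.OtherFK = s.PK;

-- ...then join the problem table against the smaller set.
SELECT i.SomeColumn, t.Detail
FROM #intermediate AS i
JOIN thisTable AS t ON i.FK = t.PK;

DROP TABLE #intermediate;
```

The optimizer then plans each piece against a small, known result set instead of one 10-table join.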

jdot
+1  A: 

Clustered primary keys will only take you so far - they just determine the semi-physical ordering of rows in pages according to the clustering key. All the row data is stored at the leaf level of that index (as opposed to a narrow non-clustered index), so it's not nearly as efficient to read - you're still effectively reading entire rows.

For higher performance, you really want covering non-clustered indexes (these are usually narrower, so more entries fit per page), and you want to see index seeks or index scans when you look at the execution plans. These read only the index key columns (plus any included columns), fit far more entries per page, and run a lot faster - much less unnecessary data being read and discarded or skipped over.
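For instance (index and column names here are hypothetical), a covering index for the join in the question might look like:

```sql
-- Non-clustered index on the join key, with INCLUDE listing the columns
-- the query actually selects, so the clustered index never has to be read.
CREATE NONCLUSTERED INDEX IX_thisTable_PK_covering
ON thisTable (PK)
INCLUDE (Detail, SomeOtherColumn);
```

If the plan then shows an index seek on IX_thisTable_PK_covering with no key lookup, the index is covering the query.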

Cade Roux