A: 

To me it still sounds as if the statistics were incorrect. Rebuilding the indexes does not necessarily update them.

Have you already tried an explicit UPDATE STATISTICS for the affected tables?
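If not, a minimal sketch of what I mean (the table and statistics names here are placeholders for yours):

    -- Rebuild the statistics from all rows, not just a sample:
    UPDATE STATISTICS dbo.ED_Transitions WITH FULLSCAN;

    -- Then check what the optimizer actually sees:
    DBCC SHOW_STATISTICS ('dbo.ED_Transitions', IX_ED_Transitions_SomeColumn);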

Tomalak
Yes, I already executed UPDATE STATISTICS on the table. As I wrote, the output from DBCC SHOW_STATISTICS shows the right row count. I just wonder where on earth SQL Server gets this large value for the actual row count. There are no deletes on that table, so the row count was never that high!
Jan
+1  A: 

It sounds like a case of Parameter Sniffing. Here's an excellent explanation along with possible solutions: I Smell a Parameter!

Here's another StackOverflow thread that addresses it: Parameter Sniffing (or Spoofing) in SQL Server
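A rough sketch of the usual local-variable workaround those links describe (the procedure, table, and column names here are made up):

    CREATE PROCEDURE dbo.GetTransitions
        @StateId INT
    AS
    BEGIN
        -- Copying the parameter into a local variable hides the sniffed
        -- value; the optimizer falls back to average-density estimates
        -- instead of compiling the plan for one specific value.
        DECLARE @LocalStateId INT;
        SET @LocalStateId = @StateId;

        SELECT *
        FROM dbo.ED_Transitions
        WHERE StateId = @LocalStateId;
    END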

TrickyNixon
Yes, I'm aware of parameter sniffing. But I don't think my problem is parameter sniffing, because I get exactly the same execution plan when using the parameters directly and when copying the parameter values to local variables!
Jan
+1  A: 

When you're checking the execution plans of the stored proc against the copy/paste query, are you using the estimated plans or the actual plans? Make sure to click Query > Include Actual Execution Plan, and then run each query. Compare those plans and see what the differences are.
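If you'd rather script it than click through the menu, something like this returns the actual per-operator row counts as text (the procedure call is only an example):

    SET STATISTICS PROFILE ON;
    EXEC dbo.GetTransitions @StateId = 42;
    SET STATISTICS PROFILE OFF;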

Brent Ozar
I'm comparing the actual plans. They are definitely the same. And it's not a copy/paste version that runs quickly; it's the same proc, but using local variable copies of the parameter values. I will edit my post to clarify.
Jan
A: 

Have you run sp_spaceused to check if SQL Server's got the right summary for that table? I believe in SQL 2000 the engine used to use that sort of metadata when building execution plans. We used to have to run DBCC UPDATEUSAGE weekly to update the metadata on some of the rapidly changing tables, as SQL Server was choosing the wrong indexes due to the incorrect row count data.

You're running SQL 2005, and BOL says that in 2005 you shouldn't have to run UpdateUsage anymore, but since you're in 2000 compat mode you might find that it is still required.
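Roughly what I mean (substitute your own table name):

    -- Report the stored row and space counts:
    EXEC sp_spaceused 'dbo.ED_Transitions';

    -- Correct the metadata in the current database if the counts are off:
    DBCC UPDATEUSAGE (0, 'dbo.ED_Transitions');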

Rick
sp_spaceused reports the correct data: ED_Transitions, rows = 1145711, data = 160048 KB, index_size = 106048 KB.
Jan
+2  A: 

OK, I finally got to the bottom of this myself.

The two query plans differ in a small detail which I missed at first: the slow one uses a Nested Loops operator to join two subqueries together. That results in the high number for the actual row count on the Index Scan operator, which is simply the product of the number of rows of input A and the number of rows of input B.

I still don't know why the optimizer decides to use the Nested Loops instead of a Hash Match, which runs 1000 times faster in this case, but I could work around the problem by creating a new index, so that the engine does an Index Seek instead of an Index Scan under the Nested Loops.
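Roughly what the fix looked like (the index, column, and second table names are simplified here, and the hash hint is only for comparing plans, not for production):

    -- New index so the inner input of the join becomes an Index Seek:
    CREATE NONCLUSTERED INDEX IX_ED_Transitions_StateId
        ON dbo.ED_Transitions (StateId);

    -- Diagnostic only: force the faster join type to compare timings.
    SELECT t.*
    FROM dbo.ED_Transitions AS t
    INNER JOIN dbo.OtherInput AS o
        ON o.StateId = t.StateId
    OPTION (HASH JOIN);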

Jan
Glad you sorted it, Jan, and good spotting! Next time, maybe try outputting the plans in Text mode, and then using a textual diff or merge tool to compare them. You'll spot the difference(s) faster that way.
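For example (the proc call is only illustrative; SET SHOWPLAN_TEXT must be alone in its batch, hence the GO separators):

    SET SHOWPLAN_TEXT ON;
    GO
    EXEC dbo.GetTransitions @StateId = 42;
    GO
    SET SHOWPLAN_TEXT OFF;
    GO

Save both outputs to files and diff them.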
Rick