I have a stored procedure that goes something like this (pseudo code)

  storedprocedure param1, param2, param3, param4
  begin
     if (param4 = 'Y')
         begin
             select * from SOME_VIEW order by somecolumn
         end
     else if (param1 is null)
          begin
             select * from SOME_VIEW
                where (param2 is null or param2 = SOME_VIEW.Somecolumn2)
                and (param3 is null or param3 = SOME_VIEW.SomeColumn3) 
             order by somecolumn
          end
     else
          select somethingcompletelydifferent
     end

All ran well for a long time. Suddenly, the query started running forever if param4 was 'Y'. Changing the code to this:

  storedprocedure param1, param2, param3, param4
  begin
     if (param4 = 'Y')
         begin
             set param2 = null
             set param3 = null
         end
     if (param1 is null)
          begin
             select * from SOME_VIEW
                where (param2 is null or param2 = SOME_VIEW.Somecolumn2)
                and (param3 is null or param3 = SOME_VIEW.SomeColumn3) 
             order by somecolumn
          end
     else
           select somethingcompletelydifferent
  end

And it runs again within expected parameters (15 seconds or so for 40,000+ records). This is with SQL Server 2005. The gist of my question is this: is this particular "feature" specific to SQL Server, or is it common among RDBMSs in general that:

  1. Queries that ran fine for two years just stop working as the data grows.
  2. The "new" execution plan destroys the ability of the database server to execute the query even though a logically equivalent alternative runs just fine?

This may seem like a rant against SQL Server, and I suppose to some degree it is, but I really do want to know if others experience this kind of reality with Oracle, DB2 or any other RDBMS. Although I have some experience with others, I have only seen this kind of volume and complexity on SQL Server, so I'm curious if others with large complex databases have similar experience in other products.

+3  A: 

There could be a couple of causes:

1) Are your statistics up to date?

2) You could be suffering from parameter sniffing.
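If parameter sniffing is the cause, a common workaround on SQL Server 2005 is to copy the procedure's parameters into local variables before using them, so the plan is compiled from average statistics rather than the first caller's values. A minimal sketch only; the procedure name and parameter types here are assumptions, not the asker's actual code:

```sql
create procedure GetRows      -- hypothetical name
    @param2 int, @param3 int  -- assumed types
as
begin
    -- Copying parameters into locals defeats parameter sniffing:
    -- the optimizer builds the plan from average column density
    -- instead of the specific values of the first call.
    declare @p2 int, @p3 int
    select @p2 = @param2, @p3 = @param3

    select *
    from SOME_VIEW
    where (@p2 is null or @p2 = SOME_VIEW.Somecolumn2)
      and (@p3 is null or @p3 = SOME_VIEW.SomeColumn3)
    order by somecolumn
end
```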

BTW for this kind of stuff

where (param2 is null or param2 = SOME_VIEW.Somecolumn2)

Take a look at "Do you use Column=@Param OR @Param IS NULL in your WHERE clause? Don't, it doesn't perform".
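The usual fix that article points toward is to build the WHERE clause dynamically with sp_executesql, so each combination of supplied parameters gets its own, properly optimized (and still parameterized) plan. A rough sketch, with column names taken from the question and parameter types assumed:

```sql
declare @sql nvarchar(max)
set @sql = N'select * from SOME_VIEW where 1 = 1'

-- Append a predicate only for parameters that were actually supplied,
-- so the optimizer never sees the "@param is null or ..." catch-all pattern.
if @param2 is not null
    set @sql = @sql + N' and Somecolumn2 = @param2'
if @param3 is not null
    set @sql = @sql + N' and SomeColumn3 = @param3'
set @sql = @sql + N' order by somecolumn'

exec sp_executesql @sql,
    N'@param2 int, @param3 int',   -- parameter types are assumptions
    @param2, @param3
```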

SQLMenace
And to add to that, "all" databases can get troubles like that - i.e. execution plans can go haywire as data grows, statistics go out of date, the DBMS has trouble handling all that data, etc.
nos
Thanks for the pointer, but in this case it performs better than a straight select with no parameters, and dynamic SQL can have its own problems. So we take it case by case.
Yishai
+1  A: 

I would imagine this specific instance of the problem, and all the conditions that lead to it, are specific to SQL Server - probably even to the version. (E.g. SQL Server 2008 would behave differently.)

But this is a general "feature" of query optimizers. They look at your query and try to make an informed guess as to what will execute fastest. As users, we have little direct control over whether the optimizer chooses (say) an Index Scan or an Index Seek, but we can influence it indirectly by expressing the same thing in alternative ways and seeing whether that improves execution time.

If there have not been any other schema changes that might influence the query, then check that the index statistics are updated. We use a weekly batch job to do this.
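For reference, such a batch job is often just a call to sp_updatestats, with a full-scan rebuild for any table whose sampled statistics mislead the optimizer. A sketch; the table name is a placeholder:

```sql
-- Refresh all out-of-date statistics in the current database (sampled):
exec sp_updatestats

-- Or, for a specific table whose sampled statistics are misleading,
-- rebuild them by reading every row:
update statistics dbo.SomeTable with fullscan  -- table name is hypothetical
```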

mdma
We actually do update them every week. It is possible that an update set everything off, as the problem showed up on the first business day after such an update.
Yishai