Hi,

I'm rewriting a stored procedure at work, primarily to stop it from doing dirty reads; I haven't made any significant structural changes. However, running the new version against the current version, I've found that the new version takes almost twice as long on a dev database that doesn't have much activity!
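Roughly, the change amounts to something like this (a minimal sketch, assuming the old version relied on NOLOCK hints for its reads; the table and column names are made up):

-- Old version: dirty reads via a NOLOCK hint (assumption, placeholder names)
-- SELECT OrderID, Total FROM dbo.Orders WITH (NOLOCK) WHERE CustomerID = @CustomerID;

-- New version: plain READ COMMITTED read, no table hint
SELECT OrderID, Total
FROM dbo.Orders
WHERE CustomerID = @CustomerID;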

Following the advice from this site: http://www.sql-server-performance.com/articles/per/optimizing_sp_recompiles_p1.aspx I used Profiler to see what was happening, and to my surprise there are a lot of "Cache Remove" events for the new version but none for the current version!

Can anyone tell me what triggers the cache to be dropped?

I have all the temp table definitions and index creation up front (the textbook says building indexes after the INSERTs is generally better, but I've experimented with that approach and found the sproc actually runs slower), and I've not made any schema changes to any referenced objects.
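The pattern looks roughly like this (a minimal sketch; the table and column names are made up):

-- Temp table and its index are both created up front...
CREATE TABLE #Staging
(
    OrderID int NOT NULL,
    Total   money NOT NULL
);

CREATE CLUSTERED INDEX IX_Staging_OrderID ON #Staging (OrderID);

-- ...before any data is loaded into it
INSERT INTO #Staging (OrderID, Total)
SELECT OrderID, Total
FROM dbo.Orders
WHERE OrderDate >= @StartDate;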

Thanks,

A: 

I just did some more experimenting with this and found that having BEGIN and COMMIT TRANSACTION around the code block actually suppresses the cache removes, though this is probably not desirable if the transaction ends up holding locks on a highly utilized table for a long period of time :-\ Is there any way to get around this without enclosing the block of code in a transaction?
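In other words, a block shaped like this no longer shows the Cache Remove events (a sketch of the shape only, with made-up names, not the real procedure):

BEGIN TRANSACTION;

CREATE TABLE #Work (OrderID int NOT NULL);
CREATE CLUSTERED INDEX IX_Work_OrderID ON #Work (OrderID);

INSERT INTO #Work (OrderID)
SELECT OrderID FROM dbo.Orders WHERE OrderDate >= @StartDate;

-- ... rest of the procedure's work ...

COMMIT TRANSACTION;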

theburningmonk
+2  A: 

Simple list:

  1. Statistic changes
  2. DBCC FREEPROCCACHE
  3. Usage/memory pressure etc

Must read articles:

  1. Batch Compilation, Recompilation, and Plan Caching Issues in SQL Server 2005
  2. SQL Programmability & API Development Team Blog

Probably need more info though, as per the comment above...
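For example, a rough way to watch whether the plan survives in cache between runs (a sketch assuming SQL Server 2005+ DMVs; the procedure name is a placeholder):

-- Is a plan for the sproc currently cached, and how often has it been reused?
SELECT cp.usecounts, cp.objtype, cp.size_in_bytes, st.[text]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.[text] LIKE '%usp_MyProcedure%';

-- Item 2: clearing the entire plan cache forces recompiles everywhere, so dev/test only
-- DBCC FREEPROCCACHE;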

gbn
A: 

Is the transaction encompassing only what needs to be atomic? Is the transaction necessary at all? For example, if the update is:

INSERT INTO tbl (v1, v2, v3) SELECT someValues FROM otherTable

A single statement like that is already atomic by default, so it doesn't need an explicit transaction around it.
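If several statements genuinely have to succeed or fail together, a minimal explicit transaction that wraps only those statements keeps the lock window short (a sketch with hypothetical names):

BEGIN TRANSACTION;

INSERT INTO tbl (v1, v2, v3)
SELECT someValues FROM otherTable;

UPDATE otherTable
SET processed = 1
WHERE processed = 0;

COMMIT TRANSACTION;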

souLTower