When I run a certain stored procedure for the first time it takes about 2 minutes to finish. When I run it for the second time it finishes in about 15 seconds. I'm assuming that this is because everything is cached after the first run. Is it possible for me to "warm the cache" before I run this procedure for the first time? Is the cached information only used when I call the same stored procedure with the same parameters again, or will it be used if I call the same stored procedure with different parameters?

A: 

The execution plan (the cached info for your procedure) is reused every time, even with different parameters. It is one of the benefits of using stored procs.

The very first time a stored procedure is executed, SQL Server generates an execution plan and puts it in the procedure cache.

Certain changes to the database can trigger an automatic update of the execution plan (and you can also explicitly demand a recompile).
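
A quick illustration of the explicit route (dbo.MyProc is a placeholder name; both statements are standard T-SQL):

    EXEC sp_recompile N'dbo.MyProc';   -- marks the procedure so a new plan is built on its next execution
    EXEC dbo.MyProc WITH RECOMPILE;    -- builds a fresh plan for this one execution only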

Execution plans are dropped from the procedure cache based on their "age". (From MSDN: objects infrequently referenced are soon eligible for deallocation, but are not actually deallocated unless memory is required for other objects.)
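
If you want to check whether a plan for your procedure is currently cached (and how often it has been reused), the standard plan-cache DMVs will show you; MyProc below is just a placeholder:

    SELECT cp.usecounts, cp.objtype, st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.text LIKE '%MyProc%';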

I don't think there is any way to "warm the cache", except to perform the stored proc once. This will guarantee that there is an execution plan in the cache and any subsequent calls will reuse it.

More detailed information is available in the MSDN documentation: http://msdn.microsoft.com/en-us/library/ms181055(SQL.90).aspx

Tim Weckx
Your response, though I can't see anything wrong with it, misses the point. Query compilation doesn't take 1m45s, so this is not the OP's issue. The OP's caching issue has to do with the data page cache, not the execution plan cache.
erikkallen
+1  A: 

I don't think that generating the execution plan will cost more than 1 second.

I believe that the difference between first and second run is caused by caching the data in memory.

The data in the cache can be reused by any further query (stored procedure or simple select).

You can 'warm' the cache by running any SELECT beforehand that reads the same data. But that warm-up read will itself cost about 90 seconds, since the data still has to come off disk once.
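
A minimal sketch of that idea, with made-up object names; the point is only that the pages read by the SELECT are the same pages the procedure needs:

    -- Warm-up read: pulls the relevant pages into the buffer cache (roughly as slow as the proc's first run)
    SELECT * FROM dbo.OrdersArchive WHERE OrderDate >= '20090101';

    -- The procedure now finds most of its data already in memory
    EXEC dbo.usp_MonthlyReport @StartDate = '20090101';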

BeachBlocker
A: 

You can check the execution plan to find out which tables and indexes your query uses. You can then execute some SQL to get the data into the cache, depending on what you see.

  • If you see a clustered index seek, you can simply do SELECT * FROM my_big_table to force all the table's data pages into the cache.
  • If you see a non-clustered index seek, you could try SELECT first_column_in_index FROM my_big_table.

To force a load of a specific index, you can also use the WITH(INDEX(index)) table hint in your cache warmup queries.
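
Putting the two bullets and the hint together, a rough warm-up script might look like this (table, column and index names are made up for illustration):

    -- Clustered index seek in the plan: read the whole table to cache its data pages
    SELECT * FROM dbo.my_big_table;

    -- Non-clustered index seek in the plan: read just that index, forcing it with a hint
    SELECT first_column_in_index
    FROM dbo.my_big_table WITH (INDEX(ix_my_big_table_first_column));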

erikkallen
+1  A: 

When you perform your query, the data is read into memory in blocks. These blocks remain in memory, but they get "aged". This means the blocks are tagged with their last access time, and when SQL Server requires another block for a new query and the memory cache is full, the least recently used block (the oldest) is kicked out of memory. (In most cases, that is; full-table-scan blocks are instantly aged to prevent full table scans from overrunning memory and choking the server.)

What is happening here is that the data blocks in memory from the first query haven't been kicked out of memory yet, so they can be used for your second query, meaning disk access is avoided and performance is improved.
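
If you want to see this for yourself, the buffer-cache DMV shows which objects currently have pages in memory. A rough sketch (run it in the relevant database; it ignores LOB allocation units for simplicity):

    SELECT OBJECT_NAME(p.object_id) AS object_name, COUNT(*) AS cached_pages
    FROM sys.dm_os_buffer_descriptors AS bd
    JOIN sys.allocation_units AS au ON bd.allocation_unit_id = au.allocation_unit_id
    JOIN sys.partitions AS p ON au.container_id = p.hobt_id
    WHERE bd.database_id = DB_ID()
    GROUP BY p.object_id
    ORDER BY cached_pages DESC;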

So what your question is really asking is "can I get the data blocks I need into memory without actually reading them (i.e. without doing a query)?". The answer is no, unless you want to cache the entire tables and have them reside in memory permanently, which, from the query time (and thus data size) you are describing, probably isn't a good idea.

Your best bet for performance improvement is looking at your query execution plans and seeing whether changing your indexes might give a better result. There are two major areas that can improve performance here:

  • creating an index where the query could use one to avoid inefficient queries and full table scans
  • adding more columns to an index to avoid a second disk read. For example, say you have a query that returns columns A and B with a WHERE clause on A and C, and you have an index on column A. Your query will use the index on A, requiring one disk read, but then needs a second disk hit to fetch columns B and C from the table. If the index contained all of A, B and C, that second disk hit can be avoided (see the sketch below).
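
A minimal sketch of such a covering index, using made-up table and column names; in SQL Server 2005 and later the extra column can go in the key or in an INCLUDE clause:

    CREATE NONCLUSTERED INDEX IX_MyTable_A_C_covering
    ON dbo.MyTable (A, C)   -- key columns: A for the seek, C for the filter
    INCLUDE (B);            -- non-key column carried in the index leaf pages, so no lookup back to the table is needed
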
Chris Latta