views: 49
answers: 1
For a particular app I have a set of queries that I run each time the database has been restarted for any reason (usually a server reboot). These "prime" SQL Server's page cache with the common core working set of the data, so that the app is not unusually slow the first time a user logs in afterwards.

One instance of the app is running on an over-specced arrangement where the SQL box has more RAM than the size of the database (4 GB in the machine; the DB is under 1.5 GB currently and unlikely to grow much relative to that in the near future). Is there a neat/easy way of telling SQL Server to go away and load everything into RAM?

It could be done the hard way by having a script scan sysobjects & sysindexes and run SELECT * FROM <table> WITH(INDEX(<index_name>)) ORDER BY <index_fields> for every key and index found, which should cause every used page to be read at least once and so end up in RAM. But is there a cleaner or more efficient way? All planned stops of the database server happen outside normal working hours (all the users are at most one timezone away and, unlike me, none of them work at silly hours), so it is not a problem if such a process, until it completes, slows users down more than an unprimed working set would.
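For reference, the "hard way" described above could be sketched roughly like this (untested; it uses the newer sys.* catalog views rather than sysobjects/sysindexes, so it assumes SQL Server 2005+, and uses COUNT_BIG(*) instead of SELECT * so no rows are sent to the client while the hinted index is still fully scanned):

```sql
-- Sketch: scan every clustered and nonclustered index of every user table
-- so each of its pages is read into the buffer pool at least once.
DECLARE @sql nvarchar(max);
DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT N'SELECT COUNT_BIG(*) FROM '
         + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
         + N' WITH (INDEX(' + QUOTENAME(i.name) + N'))'
    FROM sys.indexes i
    JOIN sys.tables t  ON t.object_id = i.object_id
    JOIN sys.schemas s ON s.schema_id = t.schema_id
    WHERE i.type IN (1, 2)        -- clustered (1) and nonclustered (2) only
      AND t.is_ms_shipped = 0;    -- skip system tables
OPEN cur;
FETCH NEXT FROM cur INTO @sql;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @sql;      -- forces a full scan of that one index
    FETCH NEXT FROM cur INTO @sql;
END;
CLOSE cur;
DEALLOCATE cur;
```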

+1  A: 

I'd use a startup stored proc that invokes sp_updatestats

  1. It will benefit queries anyway
  2. It already loops through everything anyway (you have indexes, right?)
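A minimal sketch of wiring that up (names are placeholders — `PrimeCache` and `YourAppDb` are invented for illustration; sp_procoption requires the proc to live in master, and the server's "scan for startup procs" option must be enabled):

```sql
-- Sketch: a startup proc in master that runs sp_updatestats in the app DB.
USE master;
GO
CREATE PROCEDURE dbo.PrimeCache
AS
BEGIN
    -- Three-part name runs the system proc in YourAppDb's context
    EXEC YourAppDb.dbo.sp_updatestats;
END;
GO
-- Mark the proc to run automatically every time SQL Server starts
EXEC sp_procoption @ProcName   = N'dbo.PrimeCache',
                   @OptionName = 'startup',
                   @OptionValue = 'on';
```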
gbn
do you have it configured so that it does a full scan and not just a sampling?
Dave Markle
I would use normal sampling. Data is loaded into cache in 64 KB extents (8 pages), so you only need to sample one row per extent to pull the whole extent in. (Unless you have very wide rows, but even then you'd get read-ahead IO with Enterprise edition too.) It could miss stuff, but it's easy to do, and even 5% sampling with 3 rows per page would load 100%.
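If sampling did turn out to miss too many pages, a full scan can be forced per table instead (table name is a placeholder):

```sql
-- Sketch: read every row of the table while rebuilding its statistics,
-- pulling all of its pages into the buffer pool as a side effect.
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;
```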
gbn