tags:
views: 405
answers: 3

I have been struggling with a problem that only occurs after the database has been idle for a while with respect to the data being queried. The first query is extremely slow, on the order of 30 seconds, and then related queries are fast, around 0.1 seconds. I assume this is related to caching, but I have been unable to find the cause.

Changing the MySQL variables tmp_table_size and max_heap_table_size to larger values had no effect, except that temporary tables are now created in memory.
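For reference, the change was along these lines (the 256 MB value is illustrative, not the value actually used):

```shell
# Raise the in-memory temp table limits at runtime (sizes are illustrative).
# Both variables should be raised together, since MySQL uses the lower of
# the two as the effective limit.
mysql -u root -p -e "SET GLOBAL tmp_table_size      = 256 * 1024 * 1024;
                     SET GLOBAL max_heap_table_size = 256 * 1024 * 1024;"
```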

I do not think this is related to the query itself, as it is well indexed, and after the first slow query, variants of the same query do not show up in the slow query log. I am most interested in determining the cause, or in finding a way to clear the offending cache so I can troubleshoot the issue.

+1  A: 

Is anything else running on your MySQL server? My thought is that after the first query your table is still cached in memory, and once the server is idle another process causes it to be evicted. Just a guess, though.

How much memory do you have, and what else is running?

Brendan Long
MySQL is the only thing running on the server. It has 16 GB of memory. I agree that the data is being evicted from memory, but I have not been able to figure out where it is cached so I can troubleshoot. I am not sure whether it is a MySQL configuration problem, a Linux configuration problem, a SQL problem, etc. I have googled for others with the same problem, but found no insight into the cause.
Dan Littlejohn
Based on this: http://dev.mysql.com/doc/refman/5.1/en/query-cache.html, maybe try making query_cache_size bigger. It's strange, though, because Linux should keep this data cached whether or not MySQL does. You could also try general-purpose ways to speed up queries, such as adding an index or avoiding huge tables, but I don't know whether those would help here.
Brendan Long
query_cache_size is set to zero, so this is not the issue. I want to know what is doing the caching.
Dan Littlejohn
Sorry, I'm out of ideas.
Brendan Long
A: 

Try comparing the output of "vmstat 1" on the Linux command line when running the query after an idle period, versus when you re-run it and get results quickly. Specifically, check the "bi" column (the KB read in from disk per second).

You may find the operating system has the disk blocks cached in the fast case (hence a low "bi" figure), but not in the slow case (hence a large "bi" figure).
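As a concrete sketch (assuming the typical Linux vmstat layout, where "bi" is the ninth field):

```shell
# Sample disk activity once per second; "bi" is blocks read in from disk
# per second. Expect it to spike during the slow first query and stay near
# zero on the fast re-run.
vmstat 1

# To watch just the "bi" column (field 9 in the default layout), skipping
# the two header lines:
vmstat 1 | awk 'NR > 2 { print $9 }'
```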

You might also find that vmstat shows high or low CPU usage in either case. If CPU usage is low when the query is fast, and disk throughput is also low, then your server may still be returning a cached query result, even though you've indicated the relevant config value is set to zero. Check the output of SHOW ENGINE INNODB STATUS and SHOW VARIABLES to confirm.

innodb_buffer_pool_size may also be set high (it should be...), which would cache the pages before the OS even needs to return them.

You might also find that "key_buffer" is set high; this caches index blocks, which could make your SELECT blindingly fast.
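A quick way to check all three from the shell (the connection flags are placeholders for your own credentials):

```shell
# Confirm the query cache really is off, and see how much RAM the InnoDB
# buffer pool and the key buffer are allowed to use:
mysql -u root -p -e "SHOW VARIABLES LIKE 'query_cache_size';
                     SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
                     SHOW VARIABLES LIKE 'key_buffer_size';"

# The BUFFER POOL AND MEMORY section of the status output reports how many
# pages are cached versus being read from disk:
mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G"
```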

Also check the MySQL Performance Blog for lots of useful information.

This is what is going on. The server reads the InnoDB table from disk on the first query and then uses the cache when running similar queries. Now I have to figure out how to reduce the size of ibdata1. I am considering moving to innodb_file_per_table.
Dan Littlejohn
+1  A: 

Pages of the InnoDB data files get cached in the InnoDB buffer pool. This is what you'd expect. Reading files is slow, even on good hard drives, especially random reads, which are mostly what databases see.

It may be that your first query is doing some kind of table scan which pulls a lot of pages into the buffer pool, then accessing them is fast. Or something similar.

This is what I'd expect.

Ideally, use the same engine for all tables (exceptions: system tables, temporary tables perhaps, and very small or short-lived tables). If you don't, the engines have to fight over RAM.

Assuming all your tables are InnoDB, let the buffer pool use up to 75% of the server's physical RAM (assuming you don't run too many other tasks on the machine).
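On a dedicated 16 GB server, that works out to roughly 12 GB. A minimal my.cnf sketch (the values assume MySQL is the only major process on the box; innodb_file_per_table is included here only because splitting the shared ibdata1 file was mentioned above):

```ini
# /etc/my.cnf -- illustrative values for a dedicated 16 GB InnoDB server
[mysqld]
innodb_buffer_pool_size = 12G   # ~75% of physical RAM
innodb_file_per_table   = 1     # one .ibd file per table instead of one shared ibdata1
```

Note that innodb_buffer_pool_size only takes effect at server startup, and existing tables must be rebuilt (e.g. via ALTER TABLE) before innodb_file_per_table moves them out of ibdata1.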

Then around 12 GB of your database will fit in RAM, so once the server has "warmed up", the most-used 12 GB of your database will be in RAM, where accessing it is nice and fast.

Some MySQL users "warm up" production servers after a restart by sending them queries copied from another machine for a while (these will be replication slaves) before adding them to the production pool. This avoids the extreme slowness seen while the cache is cold. YouTube, for example, does this (or at least it used to; Google bought them and they may now use Google-fu).
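A hypothetical warm-up loop along those lines, replaying a captured file of representative SELECTs (warmup_queries.sql, the database name, and the connection details are all assumptions, not anything from a real deployment):

```shell
# Replay one query per line against the freshly restarted server, discarding
# the results; the point is only to pull the relevant pages into the buffer
# pool before the server starts taking real traffic.
while IFS= read -r query; do
    mysql -u repl -p"$PASSWORD" mydb -e "$query" > /dev/null
done < warmup_queries.sql
```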

MarkR
The answer turned out to be changing the MySQL config file to tune InnoDB. With larger values for the InnoDB variables, the database now warms up after a few queries. I guess what was happening is that the cache was too small to hold all the table indexes in memory, so it was constantly re-reading them from disk, causing a slow first query. Now there is enough to keep them all in memory, and the problem is gone.
Dan Littlejohn