Howdy!

I'm currently running some intensive SELECT queries against a MyISAM table. The table is around 100 MiB (800,000 rows) and it never changes.

I need to increase the performance of my script, so I was thinking of moving the table from MyISAM to the MEMORY storage engine so I could load it completely into memory.

Besides the MEMORY storage engine, what are my options for loading a 100 MiB table into memory?

A: 

Assuming the data rarely changes, you could potentially boost query performance significantly using MySQL query caching.
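For reference, a minimal sketch of switching the query cache on for a MySQL 5.x server, where the feature exists (the cache size here is only an example, not a recommendation):

    SET GLOBAL query_cache_type = 1;                 -- cache results of cacheable SELECTs
    SET GLOBAL query_cache_size = 64 * 1024 * 1024;  -- 64 MiB for cached result sets
    SHOW STATUS LIKE 'Qcache%';                      -- hit/insert counters show whether it is being used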

karim79
I was looking into memcache and some other caching solutions, but the issue with caching queries is that they are rarely the same. Most of the time it's a different query that's going to be called just once.
L. Cosio
A: 

If your table is queried a lot it's probably already cached at the operating system level, depending on how much memory is in your server.

MyISAM also allows preloading table indices into memory using a mechanism called the MyISAM key cache. After you've created a key cache you can load an index into it using the CACHE INDEX or LOAD INDEX INTO CACHE syntax.
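As a rough sketch (the cache name, its size, and my_table are only placeholders for your own setup):

    SET GLOBAL hot_cache.key_buffer_size = 128 * 1024 * 1024;  -- create a 128 MiB named key cache
    CACHE INDEX my_table IN hot_cache;                          -- assign the table's indices to that cache
    LOAD INDEX INTO CACHE my_table;                             -- preload the index blocks immediately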

I assume that you've analyzed your table and queries and optimized your indices for the actual queries? Otherwise that's really something you should do before attempting to store the entire table in memory.

Emil H
I have already optimized the table and my queries. My table structure is something similar to this: id, name, lastname, age, sex, street, number, city, state ... And most of my queries look like this: SELECT * FROM table WHERE column = 'search parameter'
L. Cosio
A: 

If you have enough memory allocated for MySQL's use - in the InnoDB buffer pool, or for use by MyISAM - you can read the table into memory (just a 'SELECT * FROM tablename') and, if there's no reason to evict it, it stays there.
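A quick way to check and warm that up (sizes and the table name are only examples; key_buffer_size is dynamic, while the InnoDB buffer pool size has to be set in my.cnf and takes effect on restart):

    SHOW VARIABLES LIKE 'key_buffer_size';           -- memory reserved for MyISAM index blocks
    SET GLOBAL key_buffer_size = 256 * 1024 * 1024;  -- raise it if it is too small for the table's keys
    SELECT * FROM my_table;                          -- one full read pulls the data file into the OS cache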

Key handling also differs: a MEMORY table defaults to hash-based keys rather than full B-tree access, which for smaller, non-unique keys might be fast enough, but perhaps not with a table this large.

As usual, the best thing to do is to benchmark it.
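One low-risk way to benchmark it is to build a MEMORY copy next to the original and time the same queries against both (the 256 MiB limit, table name, and indexed column are assumptions about your schema):

    SET max_heap_table_size = 256 * 1024 * 1024;                 -- per-session cap on MEMORY table size
    CREATE TABLE my_table_mem ENGINE=MEMORY SELECT * FROM my_table;
    ALTER TABLE my_table_mem ADD INDEX (name);                   -- CREATE ... SELECT does not copy indices
    SELECT * FROM my_table_mem WHERE name = 'search parameter';  -- compare timing against the MyISAM table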

Another idea, if you are using v5.1, is the ARCHIVE table type, which stores rows compressed and may also speed up access to the contents if they compress well. This trades CPU time spent decompressing for reduced I/O and memory access.
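If you want to try that, converting a copy is enough to measure it (table names are placeholders; note that ARCHIVE keeps at most an index on an AUTO_INCREMENT column, so WHERE lookups on other columns become full scans of the compressed file):

    CREATE TABLE my_table_arc ENGINE=ARCHIVE SELECT * FROM my_table;  -- compressed copy to compare against
    SELECT * FROM my_table_arc WHERE name = 'search parameter';       -- a scan, but of a much smaller file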

Alister Bulman
A: 

If the data never changes, you could easily duplicate the table across several database servers.

This way you could offload some queries to a different server, gaining some extra breathing room for the main server.

The speed improvement depends on the current database load; there will be no improvement if your database load is very low.

PS:
You are aware that MEMORY tables forget their contents when the database restarts!
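If you do go the MEMORY route, a common workaround (a sketch; the file path and table names are made up) is to keep the reload statements in a small SQL script and point the server's init_file option at it, so they run on every restart:

    -- e.g. /etc/mysql/reload_memory_table.sql, referenced via init_file in my.cnf
    TRUNCATE TABLE my_table_mem;                      -- the MEMORY table survives a restart, its rows do not
    INSERT INTO my_table_mem SELECT * FROM my_table;  -- repopulate from the on-disk original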

Bob Fanger
+3  A: 

A table with 800k rows shouldn't be any problem for MySQL, no matter which storage engine you are using. At a size of 100 MB the full table (data and keys) should fit in memory (MySQL key cache, OS file cache, or probably both).

First, check the indices. In most cases, optimizing the indices gives you the best performance boost; don't try anything else unless you are pretty sure they are in good shape. Run the queries with EXPLAIN and watch for cases where no index, or the wrong index, is used. This should be done with real-world data, not on a server with test data.
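For example (the table and column names are just stand-ins for yours):

    EXPLAIN SELECT * FROM my_table WHERE lastname = 'search parameter';
    -- "key" should name the index you expect and "rows" should stay small.
    -- If "key" is NULL the query scans the whole table; an index such as
    --   ALTER TABLE my_table ADD INDEX idx_lastname (lastname);
    -- is usually the fix.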

After you have optimized your indices, the queries should finish in a fraction of a second. If they are still too slow, try to avoid running them at all by using a cache in your application (memcached, etc.). Given that the data in the table never changes, there shouldn't be any problems with stale cache data.

Uwe Mesecke