188 views, 7 answers

I have a table with about 200,000 records, and a simple SELECT query takes a long time. I am confused because I am running on a 4-core CPU with 4 GB of RAM. How should I write my query? Or does this have something to do with INDEXING?

Important note: my table is static (its data won't change).

What are your solutions?

PS

1 - my table has a primary key id

2 - my table has a unique key serial

3 - I want to query over the other fields, like where param_12 not like '%I.S%' or where param_13 = '1'

4 - 200,000 is not big, and this is exactly why I am surprised.

5 - I even have a problem when adding a simple field: my question

6 - can I create an INDEX on BOOL fields? (or is it useful?)

PS: and thanks for the answers

7 - my select should return the rows that contain the specified 'I.S' as well as those that do not.

select * from `table` where `param_12` like '%I.S%'

This is all I want. It seems no index helps here, hmm?

+1  A: 

Yes, you'll want/need to index this table, and partitioning could be helpful as well. Doing this properly is something you will need to provide more information for. You'll want to run EXPLAIN on your queries to determine which columns to index and how.

Another aspect to consider is whether or not your table is normalized. Normalized tables tend to give better performance due to lowered I/O.

I realize this is vague, but without more information that's about as specific as we can be.

BTW: a table of 200,000 rows is relatively small.

Here is another SO question you may find useful
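For example, a quick way to see how MySQL executes one of the question's queries (table and column names taken from the question) is:

```sql
-- Sketch: run EXPLAIN on the slow query to see MySQL's execution plan.
EXPLAIN SELECT * FROM `table` WHERE `param_13` = '1';

-- In the output: key = NULL together with rows close to 200000
-- indicates a full table scan; a populated key column means an
-- index is being used.
```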

RC
As you said yourself, 200,000 records are not that many. I believe partitioning should not be required; a good index and an in-memory index cache should do the trick.
extraneon
I agree; I normally don't partition tables of this size, though it may still provide performance benefits depending on the situation and the width of the table.
RC
A: 

Firstly, ensure your table has a primary key.

To answer in any more detail than that you'll need to provide more information about the structure of the table and the types of queries you are running.

Paul
A primary key is not the solution unless it's the key by which the data is queried.
extraneon
+3  A: 

Indexing will help. Please post the table definition and the SELECT query.
Add an index on each column compared with "=" in the WHERE clause.
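A minimal sketch, using the question's param_13 as the "=" column:

```sql
-- Add a plain index on the column used with "=" in the WHERE clause.
CREATE INDEX idx_param_13 ON `table` (`param_13`);

-- Afterwards this query can use the index instead of scanning all rows:
SELECT * FROM `table` WHERE `param_13` = '1';
```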

Padmarag
+1  A: 

1 - my table has a primary key id: Not really useful unless you use some scheme which requires a numeric primary key.

2 - my table has a unique key serial: The id is also unique by definition; why not use serial as the primary key? This column is automatically indexed because you defined it as UNIQUE.

3 - i want to query over the other fields like where param_12 not like '%I.S%' or where param_13 = '1': A LIKE '%something%' query cannot really use an index. Is there some way you can split param_12 into a param_12a holding the part before the match and a param_12b starting with 'I.S'? An index can be used with LIKE when the start of the string is known.

4 - 200,000 is not big and this is exactly why i am surprised: Yep, 200,000 is not that much. But without good indexes, good queries and/or a sufficient cache size, MySQL will need to read all the data from disk for comparison, which is slow.

5 - i even have problem when adding a simple field: my question

6 - can i create an INDEX for BOOL fields? Yes you can, but an index that matches half of the rows is fairly useless. An index is used to limit, as much as possible, the number of records MySQL has to load fully; if an index does not dramatically limit that number, as is often the case with a boolean in a 50-50 distribution, using the index only adds disk I/O and can slow searching down. So unless you expect something like an 80-20 distribution or better, creating the index will cost time rather than win it.
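To illustrate point 3, here is the difference between a leading wildcard and a known prefix (column name taken from the question):

```sql
-- Leading wildcard: an index on param_12 cannot be used; full scan.
SELECT * FROM `table` WHERE `param_12` LIKE '%I.S%';

-- Known prefix: an index on param_12 can be used as a range scan.
SELECT * FROM `table` WHERE `param_12` LIKE 'I.S%';
```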

extraneon
A: 

I don't believe that the keys you have will help. You have to index the columns used in your WHERE clauses.

I'd also wonder whether that LIKE requires a table scan regardless of indexes. The minute you use a leading-wildcard pattern like that you lose the value of the index, because each and every row has to be checked.

You're right: 200K isn't a huge table. EXPLAIN will help here. If you see a full table scan, redesign.

duffymo
A: 

An index on param_13 might be used, but not one on param_12 in this example, since the leading wildcard in LIKE '%...' negates the use of the index.

Romuald Brunet
A: 

If you're querying data with LIKE '%asdasdasd%' then no index can help you; a full scan is required every time. The problem here is the leading %, because it means the substring you are looking for can be anywhere in the field, so the whole value of every row has to be checked.

Possibly you might look into full-text indexing, but depending on your needs that might not be appropriate.
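A rough sketch of what that could look like in MySQL, assuming a storage engine with FULLTEXT support and using the question's column name; note that short tokens like 'I.S' may be excluded by the default minimum word length and by punctuation handling, so test this against your data first:

```sql
-- Add a full-text index on the searched column.
ALTER TABLE `table` ADD FULLTEXT INDEX ft_param_12 (`param_12`);

-- Full-text search instead of LIKE '%...%':
SELECT * FROM `table`
WHERE MATCH(`param_12`) AGAINST('I.S' IN BOOLEAN MODE);
```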

Vilx-