views: 430

answers: 5

I have a very slow query that I need to run on a MySQL database from time to time.

I've discovered that attempts to update the table that is being queried are blocked until the query has finished.

I guess this makes sense, as otherwise the results of the query might be inconsistent, but it's not ideal for me, as the query is of much lower importance than the update.

So my question really has two parts:

1) Out of curiosity, what exactly does MySQL do in this situation? Does it lock the table for the duration of the query? Or try to lock it before the update?

2) Is there a way to make the slow query not blocking? I guess the options might be: i) Kill the query when an update is needed. ii) Run the query on a copy of the table as it was just before the update took place. iii) Just let the query go wrong.

Anyone have any thoughts on this?

Thanks,

Ben

A: 

I don't know MySQL, but it sounds like a transaction problem. You should be able to set the transaction type to dirty read for your select query.

That won't necessarily give you correct results, but it shouldn't be blocked.

Better still would be to make the slow query go faster. Do some analysis and check whether you can speed it up with correct indexing and so on.
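
For example, something along these lines (the table and column names here are just made up for illustration, not taken from the question):

    -- Ask MySQL how it executes the slow query; "ALL" in the type column means a full table scan
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42 AND status = 'open';

    -- If the WHERE columns aren't indexed, an index like this might avoid the scan
    CREATE INDEX idx_orders_customer_status ON orders (customer_id, status);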

Richard L
A: 

UPDATE LOW_PRIORITY .... may be helpful. The MySQL docs aren't clear whether this would let the user requesting the update continue, with the update happening when it can (which is what I think happens), or whether the user has to wait (which would be worse than at present ...), and I can't remember.
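
For reference, it would look something like this (the table and column names are placeholders, not from the question):

    -- LOW_PRIORITY delays the write until no clients are reading the table,
    -- instead of queueing it ahead of later SELECTs (table-locking engines such as MyISAM only)
    UPDATE LOW_PRIORITY accounts
    SET balance = balance - 100
    WHERE id = 7;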

What table types are you using? If you are on MyISAM, switching to InnoDB (if you can - it has no full-text indexing) opens up more options for this sort of thing, as it supports transactions and row-level locking.

benlumley
Thanks. Unfortunately I'm stuck with MyISAM... Otherwise it sounds like switching to InnoDB would be the answer...
Ben
UPDATE LOW_PRIORITY doesn't allow UPDATEs at the same time as SELECTs. It has to do with the queueing order when deciding which lock to grant next.
Harrison Fisk
+1  A: 

For the purposes of the select statement, you should probably issue a:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
command on the connection, which causes the subsequent select statements to operate without locking.
Don't use 'SELECT ... FOR UPDATE', as that definitely locks the table rows affected by the select statement.
The full list of MySQL transaction isolation levels is in the docs.
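
In other words, on the connection that runs the slow query, something like this (the table name is only a placeholder):

    -- Applies only to this connection, for all subsequent statements
    SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    -- The slow reporting query then runs as a dirty read
    SELECT status, COUNT(*) FROM orders GROUP BY status;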

Petesh
Hmm... I tried the SET SESSION TRANSACTION... command, and it doesn't seem to have made any difference...
Ben
Ah... I think SET TRANSACTION only applies to InnoDB, and I'm using MyISAM (unfortunately!)
Ben
+1  A: 

First of all, you need to know what engine you're using (MyISAM or InnoDB).

This is clearly a transaction problem.

Take a look at section 13.4.6, SET TRANSACTION Syntax, in the MySQL manual.
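
To check which engine a table uses, something like this works (the table name is a placeholder):

    -- The Engine column of the output shows MyISAM, InnoDB, etc.
    SHOW TABLE STATUS LIKE 'orders';

    -- Or look for the ENGINE=... clause at the end of the table definition
    SHOW CREATE TABLE orders;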

DonOctavioDelFlores
+5  A: 

It sounds like you are using a MyISAM table, which uses table level locking. In this case, the SELECT will set a shared lock on the table. The UPDATE then will try to request an exclusive lock and block and wait until the SELECT is done. Once it is done, the UPDATE will run like normal.

MyISAM Locking

If you switched to InnoDB, then your SELECT will set no locks by default. There is no need to change transaction isolation levels as others have recommended (repeatable read is default for InnoDB and no locks will be set for your SELECT). The UPDATE will be able to run at the same time. The multi-versioning that InnoDB uses is very similar to how Oracle handles the situation. The only time that SELECTs will set locks is if you are running in the serializable transaction isolation level, you have a FOR UPDATE/LOCK IN SHARE MODE option to the query, or it is part of some sort of write statement (such as INSERT...SELECT) and you are using statement based binary logging.

InnoDB Locking
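
If switching is possible, the conversion itself is a single statement (the table name is a placeholder); be aware that it rewrites the whole table, and since InnoDB has no full-text indexing, any FULLTEXT indexes would have to be dropped first:

    -- Rebuilds the table using the InnoDB engine; can take a while on a big table
    ALTER TABLE orders ENGINE = InnoDB;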

Harrison Fisk