views: 82
answers: 4

I'm getting these about 5 times an hour in my slow query logs:

# Query_time: 11.420629  Lock_time: 0.000033 Rows_sent: 0  Rows_examined: 0
SET timestamp=1267487708;
INSERT INTO record_lock (record_lock.module_id, record_lock.module_record_id, record_lock.site_id, record_lock.user_id, record_lock.expiration_date_time, record_lock.date_time_created) VALUES ('40', '12581', '940', '155254', '2010-03-02 00:24:57', '2010-03-01 23:54:57');

# Query_time: 2.095374  Lock_time: 0.000031 Rows_sent: 0  Rows_examined: 0
SET timestamp=1267488361;
DELETE
FROM record_lock
WHERE record_lock.user_id = 221659 AND record_lock.expiration_date_time IS NOT NULL;

The record_lock table currently uses InnoDB, and it has under a dozen records in it right now.

We have several thousand active users in our system. Each time they edit a record, we INSERT into this table. And on each and every page load anywhere in the system, we 1) SELECT from the table to see if there are any locks for the current user and 2) run a DELETE query against that table if there are any records for the user, referencing the table's primary keys in the WHERE clause.
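
For reference, the per-page-load SELECT looks roughly like this (the column list here is a guess at what's needed; the DELETE is the one shown in the slow log above):

SELECT module_id, module_record_id, expiration_date_time
FROM record_lock
WHERE record_lock.user_id = 155254;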

Here is the table's schema:

CREATE TABLE IF NOT EXISTS `record_lock` (
  `module_id` int(10) unsigned NOT NULL,
  `module_record_id` int(10) unsigned NOT NULL,
  `site_id` int(10) unsigned NOT NULL,
  `user_id` int(10) unsigned NOT NULL,
  `expiration_date_time` datetime NOT NULL,
  `date_time_created` datetime DEFAULT NULL,
  PRIMARY KEY (`module_id`,`module_record_id`),
  KEY `record_lock_site_id` (`site_id`),
  KEY `index_user_id` (`user_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
+1  A: 

Have you tried running an EXPLAIN on the queries?
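
For example (on MySQL 5.0/5.1, EXPLAIN only works on SELECT statements, so rewrite the DELETE's WHERE clause as a SELECT to see which index it would use):

EXPLAIN SELECT *
FROM record_lock
WHERE user_id = 221659 AND expiration_date_time IS NOT NULL;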

Toby Hede
No, I've never used EXPLAIN. I'll give it a go.
Chad Johnson
Nothing stands out with an EXPLAIN for a SELECT query against that table.
Chad Johnson
+2  A: 

How many queries are you doing per second?

Could you not just put a locked field in the records themselves? I assume you are getting the record anyway. You could also use something like memcached for storing the locks.
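
A rough sketch of the "locked field" idea, assuming a hypothetical module_record table with an id column (the names here are made up for illustration):

ALTER TABLE module_record
  ADD COLUMN locked_by_user_id int(10) unsigned NULL,
  ADD COLUMN lock_expires_at datetime NULL;

-- take the lock in the same round trip that fetches the record
UPDATE module_record
SET locked_by_user_id = 155254,
    lock_expires_at = '2010-03-02 00:24:57'
WHERE id = 12581
  AND (locked_by_user_id IS NULL OR lock_expires_at < NOW());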

I don't know the specifics off the top of my head, but my understanding is that InnoDB is great for concurrent reads, but sucks for concurrent writes. MyISAM might be better, but my gut tells me the current design is flawed.

Jackson Miller
InnoDB is much better for write-heavy applications than read-heavy applications.
Xorlev
I totally agree with the memcached idea. That may be the direction we head if we cannot find further optimizations.
Chad Johnson
Well, for the sake of optimization, you might want to check out MyTop to watch the processes a little and see the ratio of inserts / updates / deletes, etc. I think we need more information about what is going on before you can optimize it. Do you know how frequently this table is getting written to and deleted from? Are there only 12 records, but they are changing every second?
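If MyTop isn't handy, roughly the same rates can be pulled from the server's own counters (sample them twice and divide the difference by the interval):

SHOW GLOBAL STATUS LIKE 'Com_insert';
SHOW GLOBAL STATUS LIKE 'Com_update';
SHOW GLOBAL STATUS LIKE 'Com_delete';
SHOW GLOBAL STATUS LIKE 'Com_select';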
Jackson Miller
Okay, I'm getting the following: 0.29 inserts/s, 0.03 updates/s, 0.09 deletes/s, 20365.73 reads/s. 20.37K reads/s seems like a lot--do you agree? I'm not sure all 20K reads are against the same table.
Chad Johnson
Whoa! That seems like a ton to me. 1,000 would be a pretty big number. Can you share a little more about how many locks are getting checked on a typical page load? Could there be a looping cron job running somewhere?
Jackson Miller
+1  A: 

Is it possibly too many connections trying to hit the same table? You could try segmenting the table on user_id to help with that.
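
If "segmenting" means native partitioning (MySQL 5.1+), a rough sketch follows; note that MySQL requires the partition column to be part of every unique key, so user_id would have to be added to the primary key:

CREATE TABLE record_lock_by_user (
  module_id int(10) unsigned NOT NULL,
  module_record_id int(10) unsigned NOT NULL,
  site_id int(10) unsigned NOT NULL,
  user_id int(10) unsigned NOT NULL,
  expiration_date_time datetime NOT NULL,
  date_time_created datetime DEFAULT NULL,
  PRIMARY KEY (module_id, module_record_id, user_id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci
  PARTITION BY HASH (user_id) PARTITIONS 16;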

Jeff Beck
Well, thanks for the input, but that sounds terribly, terribly messy from a database administration perspective.
Chad Johnson
Should too many connections really be a problem with InnoDB and a couple thousand users?
Chad Johnson
A: 

Switching on the InnoDB monitors can help narrow down the causes of poor performance:

SHOW ENGINE INNODB STATUS and the InnoDB Monitors
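
For example (the one-off snapshot works on any version; on 5.0/5.1 the standard monitor is switched on by creating a specially named table and off again by dropping it):

-- one-off snapshot: semaphores, latest foreign key error, transactions,
-- buffer pool and row-operation counters
SHOW ENGINE INNODB STATUS\G

-- write the same report to the error log roughly every 15 seconds
CREATE TABLE innodb_monitor (a INT) ENGINE=InnoDB;
DROP TABLE innodb_monitor;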

Martin
Okay, I'm getting the following: 0.29 inserts/s, 0.03 updates/s, 0.09 deletes/s, 20365.73 reads/s. 20.37K reads/s seems like a lot--do you agree? I'm not sure all 20K reads are against the same table. I also get "Foreign key constraint fails for table `record_lock`." Could the foreign key constraint failure slow things down?
Chad Johnson
I cannot see a foreign key constraint in the record_lock table. Do you have constraints from other tables into the record_lock table? Looking again at the slow-log entries: the locks are not held for long, the record_lock table is small, and so the actual statements should take no time. Is it just that your system is getting maxed out doing other work? 20k reads/sec could do that. How is the output from "iostat -dx" looking on your database? Is the CPU pegged? %util? Did you try the monitors? They can give a lot of detail on contention and so on.
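One quick way to check whether other tables have foreign keys pointing at record_lock is an information_schema query:

SELECT table_name, constraint_name, column_name
FROM information_schema.KEY_COLUMN_USAGE
WHERE referenced_table_name = 'record_lock';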
Martin
Another thought: if you are serious about using MySQL, then this O'Reilly book is well worth the money: "High Performance MySQL: Optimization, Backups, Replication, and More"
Martin