I have a MyISAM table that basically contains a log. A cluster of machines does single-record INSERTs on this table at a rate of at most 50 per second, but the same table is also SELECTed from by a web application, and indexed to accommodate this. There are no UPDATEs or DELETEs, though.

So from what I've gathered, I should be using concurrent inserts. (Right?) MyISAM will normally do this for me without any extra work. (Is this correct?)

But what I can't find is a way to guarantee that a given INSERT is processed concurrently. I know that I can set the global variable concurrent_insert to 2, but I'd rather not set this globally.
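For reference, this is how I've been checking what the variable is currently set to (these are just the standard MySQL statements, nothing specific to my table):

```sql
-- concurrent_insert is a global-only variable, so this shows the
-- server-wide setting (0 = off, 1 = only for tables without holes,
-- 2 = always, appending at the end of the data file).
SHOW GLOBAL VARIABLES LIKE 'concurrent_insert';

-- Equivalent:
SELECT @@GLOBAL.concurrent_insert;
```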

So my questions are:

  • Is there some way I'm missing to guarantee a concurrent insert?

  • If not, is there a command I can use to see whether a table meets the concurrent-insert requirements? (I believe just knowing whether a table has holes should be enough?) I will also settle for being able to just monitor the table; see the sketch after this list for the kind of check I have in mind.

  • And I'm also curious, is there some other database system that can handle this kind of load better? I'm totally okay with a NoSQL solution, if that happens to be the case. As long as I can talk to it from Ruby and C.
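To make that second point concrete, this is roughly the check I would settle for. `log_entries` and `mydb` are made-up names, and I'm assuming `Data_free` is an adequate proxy for "the table has holes":

```sql
-- Data_free > 0 on a MyISAM table means the data file contains
-- holes left by deleted rows, which is what disables concurrent
-- inserts when concurrent_insert = 1.
SHOW TABLE STATUS LIKE 'log_entries';

-- The same figure via information_schema, easier to poll from a script:
SELECT DATA_FREE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb'
  AND TABLE_NAME = 'log_entries';
```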

A: 

Why don't you want to set concurrent_insert=2 globally? That would give you what you want.
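It is a dynamic variable, so you can change it at runtime and persist it in the config file, along these lines:

```sql
-- Takes effect immediately for subsequent statements
-- (requires the SUPER privilege).
SET GLOBAL concurrent_insert = 2;

-- To keep it across restarts, also add it under [mysqld] in my.cnf:
--   concurrent_insert = 2
```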

Another option you may want to consider for this type of MyISAM table is INSERT DELAYED: http://dev.mysql.com/doc/refman/5.1/en/insert-delayed.html
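A minimal sketch of what that looks like (the table and column names below are placeholders):

```sql
-- The client gets control back immediately; the row is queued in
-- memory and written by a handler thread when the table is not in use.
INSERT DELAYED INTO log_entries (logged_at, host, message)
VALUES (NOW(), 'web01', 'request served');
```

The trade-off is that queued rows live only in memory, so they can be lost if the server crashes before they are flushed to the table.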

Ike Walker
There are other databases and tables for which I consider that undesirable. I like the `DELAYED` pointer, though. Thanks!
Shtééf