I have a MyISAM table that basically contains a log. A cluster of machines does single-record `INSERT`s into this table at a rate of 50 per second at most, but the same table is also `SELECT`ed from by a web application, and indexed to accommodate this. There are no `UPDATE`s or `DELETE`s, though.
So from what I've gathered, I should be using concurrent inserts. (Right?) MyISAM will normally do this for me without any extra work. (Is this correct?)
But what I can't find is a way to guarantee that a given `INSERT` is processed concurrently. I know that I can set the global variable `concurrent_insert` to `2`, but I'd rather not set it globally.
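To be concrete, the only knob I've found so far is the server-wide one (a sketch of what I mean; I'd prefer something scoped to a single statement or table):

```sql
-- Check the current setting:
--   0 = concurrent inserts disabled,
--   1 = allowed when the table has no holes (the default),
--   2 = allowed even when the table has holes.
SHOW VARIABLES LIKE 'concurrent_insert';

-- This works, but it affects every MyISAM table on the server:
SET GLOBAL concurrent_insert = 2;
```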
So my questions are:
Is there some way I'm missing to guarantee a concurrent insert?
If not, is there a command I can use to check whether a table meets the concurrent-insert requirements? (I believe just knowing whether the table has holes should be enough?) I would also settle for being able to simply monitor the table.
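This is the kind of monitoring I have in mind, assuming the `Data_free` column of `SHOW TABLE STATUS` actually reflects holes in a MyISAM data file (which I'm not certain of; `my_log_table` is a placeholder for my real table name):

```sql
-- Data_free > 0 would suggest the table has holes (free blocks mid-file),
-- which blocks concurrent inserts under concurrent_insert = 1.
SHOW TABLE STATUS LIKE 'my_log_table';

-- Defragmenting should remove any holes, at the cost of locking the table:
OPTIMIZE TABLE my_log_table;
```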
And I'm also curious whether there is some other database system that can handle this kind of load better. I'm totally okay with a NoSQL solution, if that happens to be the case, as long as I can talk to it from Ruby and C.