I was wondering if anyone has a suggestion for what kind of storage engine to use. The program needs to perform a lot of writes to the database but very few reads.

[edit] No foreign keys necessary. The data is simple, but it needs to perform the writes very fast.

+2  A: 

From jpipes:

MyISAM and Table-Level Locks

Unlike InnoDB, which employs row-level locking, MyISAM uses a much coarser-grained locking system to ensure that data is written to the data file in a protected manner. Table-level locking is the only level of lock for MyISAM, and this has a couple of consequences:

  • Any connection issuing an UPDATE or DELETE against a MyISAM table will request an exclusive write lock on the MyISAM table. If no other locks (read or write) are currently placed on the table, the exclusive write lock is granted and all other connections issuing requests of any kind (DDL, SELECT, UPDATE, INSERT, DELETE) must wait until the thread with the exclusive write lock updates the record(s) it needs to and then releases the write lock.
  • Since there are only table-level locks, there is no ability (as there is with InnoDB) to lock just one record or a small set of records, which would allow other threads to SELECT from other parts of the table data.

The point is that, for writing, InnoDB is better: it locks only the rows it touches rather than the whole table, so more actions/requests can proceed in parallel.
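For example (the table name below is just a placeholder, not anything from the question), the engine is a per-table choice, and an existing MyISAM table can be converted in place:

    -- Convert an existing table to InnoDB (this rewrites the table)
    ALTER TABLE audit_log ENGINE=InnoDB;

    -- Check which engine a table is currently using
    SHOW TABLE STATUS LIKE 'audit_log';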

Itay Moav
A: 

You've not really given us enough information to make a considered suggestion - are you wanting to use foreign keys? Row-level locking? Page-level locking? Transactions?

As a general rule, if you want to use transactions, InnoDB/BerkeleyDB. If you don't, MyISAM.
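To illustrate the transaction point with a small sketch (the tables here are made up): the COMMIT/ROLLBACK below is only atomic on a transactional engine such as InnoDB or BDB; on MyISAM each statement takes effect immediately and there is nothing to roll back.

    START TRANSACTION;
    INSERT INTO orders (customer_id, total) VALUES (42, 19.99);
    INSERT INTO order_items (order_id, sku) VALUES (LAST_INSERT_ID(), 'ABC-1');
    COMMIT;   -- or ROLLBACK; to undo both inserts (transactional engines only)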

ninesided
A: 

In my experience, MyISAM is great for fast writes as long as, after insertion, the data is effectively read-only. It'll happily keep appending rows faster than any other option I'm familiar with, even while maintaining indexes.

But as soon as you start deleting records or updating indexed key values, so that MyISAM has to refill the emptied holes (in the table or its indexes), the discussion gets a lot more complicated.

For classic log-type or journal-type tables, though, it's very happy.
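A minimal sketch of that kind of journal table (the names are illustrative, not from the question): create it as MyISAM and only ever INSERT into it, so rows keep being appended to the end of the data file.

    CREATE TABLE request_log (
        id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        logged_at DATETIME NOT NULL,
        detail    VARCHAR(255) NOT NULL,
        PRIMARY KEY (id)
    ) ENGINE=MyISAM;

    -- Avoid DELETE/UPDATE on this table; if holes do accumulate,
    -- OPTIMIZE TABLE request_log; will defragment it.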

le dorfier
A: 

"It needs to perform the writes very fast" is a vague requirement. Whatever you do, writes may be delayed by contention in the database. If your application needs to not block when it's writing audit records to the database, you should make the audit writing asynchronous and keep your own queue of audit data on disc or in memory (so you don't block the main worker thread/process)

InnoDB may allow concurrent inserts, but that doesn't mean they won't be blocked by contention for resources or internal locks for things like index pages.

MyISAM allows one inserter and several readers ("Concurrent inserts") under the following circumstances:

  • The table has no "holes" in it (gaps left in the data file by deleted rows)
  • There are no threads trying to do an UPDATE or DELETE

If you have an append-only table, which you recreate each day (or create a new partition every day if you use 5.1 partitioning), you may get away with this.
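A sketch of the 5.1 partition-per-day variant (all names and dates are made up); a daily job adds the next day's partition before it is needed:

    CREATE TABLE audit_log (
        logged_at DATETIME NOT NULL,
        message   VARCHAR(255) NOT NULL
    ) ENGINE=MyISAM
    PARTITION BY RANGE (TO_DAYS(logged_at)) (
        PARTITION p20090601 VALUES LESS THAN (TO_DAYS('2009-06-02')),
        PARTITION p20090602 VALUES LESS THAN (TO_DAYS('2009-06-03'))
    );

    -- Run once a day, e.g. from cron:
    ALTER TABLE audit_log
        ADD PARTITION (PARTITION p20090603 VALUES LESS THAN (TO_DAYS('2009-06-04')));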

MyISAM concurrent inserts are mostly very good, IF you can use them.

When writing audit records, do several at a time if possible - this applies whichever storage engine you use. It is a good idea for the audit process to "batch up" records and do an insert of several at once.
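A multi-row INSERT is the simplest way to do that (table and values are placeholders): one statement, one round trip, several audit records.

    INSERT INTO audit_log (logged_at, message) VALUES
        ('2009-06-01 10:00:01', 'user 17 logged in'),
        ('2009-06-01 10:00:02', 'user 17 viewed /home'),
        ('2009-06-01 10:00:05', 'user 23 logged in');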

MarkR