I have a simple db with 4 tables of a few million rows each, and several indexes. I'm performing several hundred updates and inserts on them per minute. The reads are much less frequent, but they need to be fast - it's for a web app. The reads should have priority - I can delay writes if it helps improve the snappiness of reading.

Currently, when I'm not writing inserts and updates, the selects all hum along nicely. When I'm writing at the same time, things can slow down - sometimes massively. The server definitely gets IO bound - I've used iostat and seen the disk utilization at 99% during periods of heavy writing.

Tomorrow I'm going to try cutting an index or two, shrinking the row size, and disabling the query cache. Does anyone have any other suggestions on how the tables or MySQL itself should be tuned for lots of writes and few reads?

The tables are currently set up to use the InnoDB engine with compact rows, and most of the config is still set to defaults apart from the buffer pool size. The db is going to continue to grow quickly, so having it all sit in RAM isn't an option.

Update: It's on slicehost.com - 1 GB RAM, RAID 10.

A: 

I suggest you do write/read splitting by setting up a MySQL master-slave configuration.

You write to the master and redirect reads to the slaves.

The splitting itself can be done in two ways:

  1. Use a proxy (MySQL Proxy, Continuent/Sequoia, ...)
  2. Do the splitting yourself in your application (see the sketch below)
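
For option 2, a minimal sketch of the application-side routing, assuming the MySQLdb client library and hypothetical host names, credentials, and table - adapt to your own driver and schema:

    import MySQLdb

    # Hypothetical connection details - substitute your own.
    master = MySQLdb.connect(host="db-master", user="app", passwd="secret", db="mydb")
    slave = MySQLdb.connect(host="db-slave", user="app", passwd="secret", db="mydb")

    def execute(sql, params=None):
        """Route SELECTs to the slave and everything else to the master."""
        is_read = sql.lstrip().lower().startswith("select")
        conn = slave if is_read else master
        cur = conn.cursor()
        cur.execute(sql, params)
        if is_read:
            return cur.fetchall()
        conn.commit()  # MySQLdb runs with autocommit off by default

    # execute("INSERT INTO events (user_id, payload) VALUES (%s, %s)", (42, "..."))
    # rows = execute("SELECT payload FROM events WHERE user_id = %s", (42,))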
jitter
Seems quite pointless to me. All the servers still have to handle the same write load. If reads are infrequent, it seems unlikely that replication will help.
Emil H
Thanks for responding to this Emil. I'd like to avoid pointless goose chases if I can. You may have saved me some hours here. Does anyone agree with Jitter's suggestion?
Tim Haines
+1  A: 

Indexes are going to slow the writes but are necessary for read performance, so keep as few as you can get away with to support the reads. Is your clustered index going to cause a lot of slowdown?

Another possibility is to read from a database/table separate from the one you write to, and opt for eventual consistency - though that may not be possible in your case.

Cade Roux
How would a clustered index cause slowdowns? It's a big int, so it adds minimal length to the secondary indexes. I wouldn't mind if the majority of new writes were only available in 3-minute chunks or something - most of it doesn't need to be instantly available.
Tim Haines
A clustered index would cause slowdowns if it caused a lot more writes due to the insertion order. If it's an auto-incrementing big int, you shouldn't see a problem there.
Cade Roux
It's not auto-incrementing, but it should be generally increasing - most new entries are near the max value.
Tim Haines
+1  A: 

One thing (of many) to consider is using transactions. If you can bundle several write operations into one transaction, it should lower the number of disk accesses.
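
A minimal sketch of what that could look like, assuming the MySQLdb client library and a hypothetical `events` table - one commit per batch of rows instead of one per row:

    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="mydb")
    cur = conn.cursor()

    BATCH_SIZE = 100  # tune to taste; larger batches mean fewer log flushes
    pending = 0
    for row in incoming_rows():  # hypothetical source of rows to insert
        cur.execute("INSERT INTO events (user_id, payload) VALUES (%s, %s)", row)
        pending += 1
        if pending >= BATCH_SIZE:
            conn.commit()  # the InnoDB log is flushed per commit, not per statement
            pending = 0
    if pending:
        conn.commit()

With the default innodb_flush_log_at_trx_commit=1, every commit forces a log flush to disk, so cutting commits from one per row to one per hundred rows directly cuts that IO.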

Itay Moav
I can't upvote yet, otherwise I would. Thanks for the suggestion. It brought up some interesting Google results that look like they'll help. (Including http://dev.mysql.com/doc/refman/5.1/en/innodb-tuning.html)
Tim Haines
A: 

If MySQL supported an index fill factor, that would be one area to look at. Unfortunately version 5 doesn't support a fill factor (apparently it's on the feature request list for version 6.x).

  • Removing any unused indexes and limiting the width of your indexes would help (see the sketch after this list).

  • Examine how much memory the server has.

  • Is the Disk RAID'ed?
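
A small sketch of the first point, assuming MySQLdb, a hypothetical `events` table, and a hypothetical `idx_unused` index name:

    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="mydb")
    cur = conn.cursor()

    cur.execute("SHOW INDEX FROM events")  # review what's there before dropping anything
    for row in cur.fetchall():
        print(row)

    # Every secondary index must be updated on every insert/update,
    # so each index you can drop saves write IO.
    cur.execute("ALTER TABLE events DROP INDEX idx_unused")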

Mitch Wheat
The server has 1 GB of memory and is on RAID 10. I won't be able to keep increasing the memory to fit the db - it will continue to outgrow it. Is there a minimum I should be looking at? What role does RAID play?
Tim Haines
RAID will also slow writes and improve reads.
Cade Roux
@Cade Roux: not necessarily
Mitch Wheat
@Tim Haines: 1GB is quite small. Suggest you use at least 2GB
Mitch Wheat
+1  A: 

Unfortunately for you, MySQL is typically built for an 80/20 read/write ratio. I don't know if there is a lot you can do.

Are you using transactions?

If the data you select isn't often affected by the writes (so that refreshing it whenever a write modifies it wouldn't hurt write performance much), you can externalize it at write time, e.g. at the end of the transaction.
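
A sketch of that idea, assuming MySQLdb and hypothetical `events` and `user_summary` tables - the aggregate the web app reads is refreshed inside the same write transaction, so page views never have to scan `events`:

    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="mydb")
    cur = conn.cursor()

    def record_event(user_id, payload):
        cur.execute("INSERT INTO events (user_id, payload) VALUES (%s, %s)",
                    (user_id, payload))
        # Maintain a precomputed row for the reads, assuming the
        # user_summary row already exists.
        cur.execute("UPDATE user_summary"
                    " SET event_count = event_count + 1, last_event = NOW()"
                    " WHERE user_id = %s", (user_id,))
        conn.commit()  # both statements land atomically, at the end of the transaction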

streetpc
I'm not yet using transactions on the multiple writes, but I'll batch them into transactions tomorrow to see if that helps. I'll also look at increasing the log size and log buffer as per http://dev.mysql.com/doc/refman/5.1/en/innodb-tuning.html
Tim Haines
Okay, please give us feedback afterwards.
streetpc
+1  A: 

I think you need to consider partitioning. It's pretty much the only way to scale writes. MySQL has native support for this from 5.1 onwards, but it's also quite possible to roll your own solution. The latter is much more complicated, so if possible I'd recommend using the built-in support. However, considering your heavy write load, it might not be enough. It's hard to give you more detailed advice without knowing how the data is structured, though.
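
A sketch of the built-in support, assuming a hypothetical `events` table whose primary key is the generally-increasing big int `id` discussed above (MySQL requires the partitioning column to be part of every unique key):

    import MySQLdb

    conn = MySQLdb.connect(host="localhost", user="app", passwd="secret", db="mydb")
    cur = conn.cursor()
    cur.execute("""
        ALTER TABLE events
        PARTITION BY RANGE (id) (
            PARTITION p0 VALUES LESS THAN (10000000),
            PARTITION p1 VALUES LESS THAN (20000000),
            PARTITION pmax VALUES LESS THAN MAXVALUE
        )
    """)

Since new rows mostly carry near-maximum ids, inserts concentrate in the last partition, which keeps the hot pages small and cacheable even as the table as a whole outgrows RAM.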

Emil H
Wow - I had a total misconception on what partitioning was, so hadn't even bothered to read about it until now. Sounds great - will try it out tomorrow. Thanks for the suggestion.
Tim Haines
You might also want to search for "sharding". It's pretty much the same idea as partitioning, but the term emphasizes that it's a custom solution. I think you'll be able to find additional resources that way.
Emil H