tags:
views: 3480
answers: 8

At what point does a MySQL database start to lose performance?

  • Does physical database size matter?
  • Do number of records matter?
  • Is any performance degradation linear or exponential?

I've got (what I believe to be) a large database, almost 2GB and about 15M records. Should I be cleaning this out, or am I OK for another few years?

+14  A: 

The physical database size doesn't matter. The number of records doesn't matter.

In my experience the biggest problem that you are going to run into is not size, but the number of queries you can handle at a time. Most likely you are going to have to move to a master/slave configuration so that the read queries can run against the slaves and the write queries run against the master. However, if you are not ready for this yet, you can always tweak your indexes for the queries you are running to speed up the response times. There is also a lot of tweaking you can do to the network stack and kernel in Linux that will help.
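As a rough sketch of the index tweaking (the table, columns, and index name here are made up; adjust them to your own schema):

    -- Suppose a frequent slow query looks like:
    --   SELECT * FROM orders WHERE customer_id = 42 AND status = 'open';
    -- A composite index covering the WHERE clause usually speeds it up:
    CREATE INDEX idx_orders_customer_status ON orders (customer_id, status);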

I have had mine get up to 10G, with only a moderate number of connections and it handled the requests just fine.

I would focus first on your indexes, then have a server admin look at your OS, and if all that doesn't help it might be time for a master/slave configuration.

Nick Berardi
A: 

@Nick:

My database isn't currently degrading (Or at least I don't think it is...It's hard to remember how responsive it was 4 years ago).

I just needed the info to make sure I wasn't about to run into trouble anytime soon.

@David:

Thanks, I'll look into how the database is laid out.

By the way, this is too good: the link you provided, mysqlperformanceblog.com/, goes to a page consisting of the following text (at the time of this writing): Error establishing a database connection

(It's back up now)

Grant
+9  A: 

In general this is a very subtle issue and not trivial at all. I encourage you to read mysqlperformanceblog.com and High Performance MySQL. I really think there is no general answer for this.

I'm working on a project which has a MySQL database with almost 1TB of data. The most important scalability factor is RAM. If the indexes of your tables fit into memory and your queries are highly optimized, you can serve a reasonable number of requests on an average machine.
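As a rough way to check whether your indexes fit (information_schema is available from MySQL 5.0 on; which buffer variable matters depends on your storage engine):

    -- Total index size per schema, in MB, to compare against the memory
    -- you give MySQL (e.g. innodb_buffer_pool_size or key_buffer_size):
    SELECT table_schema,
           SUM(index_length) / 1024 / 1024 AS index_mb
    FROM   information_schema.TABLES
    GROUP  BY table_schema;

    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';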

The number of records does matter, depending on what your tables look like. It makes a difference whether you have a lot of varchar fields or only a couple of ints or longs.

The physical size of the database matters as well: think of backups, for instance. Depending on your engine, your physical db files only grow and don't shrink, for instance with InnoDB. So deleting a lot of rows doesn't help to shrink your physical files.
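If you do need to reclaim space after bulk deletes, one common approach (assuming innodb_file_per_table is in effect; the table name is just an example) is to rebuild the table:

    -- For InnoDB this is mapped to a table rebuild (ALTER TABLE),
    -- which rewrites the data file and frees the space left by deleted rows:
    OPTIMIZE TABLE orders;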

There's a lot to these issues, and as in many cases, the devil is in the details.


david
+1  A: 

Also watch out for complex joins. Transaction complexity can be a big factor in addition to transaction volume.

Refactoring heavy queries sometimes offers a big performance boost.
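A typical example of such a refactoring (tables and columns are invented for illustration) is replacing a correlated subquery with a join:

    -- Before: one subquery executed per customer row
    SELECT c.id,
           (SELECT COUNT(*) FROM orders o WHERE o.customer_id = c.id) AS order_count
    FROM customers c;

    -- After: a single join the optimizer can handle in one pass
    SELECT c.id, COUNT(o.id) AS order_count
    FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id;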

saint_groceon
+1  A: 

I once was called upon to look at a MySQL installation that had "stopped working". I discovered that the DB files were residing on a Network Appliance filer mounted with NFS2, which has a maximum file size of 2GB. And sure enough, the table that had stopped accepting transactions was exactly 2GB on disk. But with regards to the performance curve, I'm told that it was working like a champ right up until it didn't work at all! This experience always serves me as a nice reminder that there are always dimensions above and below the one you naturally suspect.

jj33
+3  A: 

It's kind of pointless to talk about "database performance"; "query performance" is a better term here. And the answer is: it depends on the query, the data it operates on, indexes, hardware, etc. You can get an idea of how many rows are going to be scanned and which indexes are going to be used with the EXPLAIN syntax.
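For example (the query and table are hypothetical):

    -- The 'rows' column is the estimated number of rows scanned and
    -- 'key' shows which index, if any, the optimizer picked:
    EXPLAIN SELECT * FROM orders WHERE created_at > '2008-01-01';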

2GB does not really count as a "large" database - it's more of a medium size.

deadprogrammer
+1  A: 

"I would focus first on your indexes, then have a server admin look at your OS, and if all that doesn't help it might be time for a master/slave configuration."

That's true. Another thing that usually works is to just reduce the quantity of data that's repeatedly worked with. If you have "old data" and "new data" and 99% of your queries work with new data, just move all the old data to another table - and don't look at it ;)

-> Have a look at partitioning.
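Two rough sketches of the "old vs. new data" idea (all table and column names are invented, and range partitioning needs MySQL 5.1+ plus a partitioning column that is part of every unique key):

    -- Manual archiving: copy old rows away, then delete them
    INSERT INTO orders_archive
      SELECT * FROM orders WHERE created_at < '2007-01-01';
    DELETE FROM orders WHERE created_at < '2007-01-01';

    -- Or let MySQL split the table for you with range partitioning
    ALTER TABLE orders
      PARTITION BY RANGE (YEAR(created_at)) (
        PARTITION p_old VALUES LESS THAN (2008),
        PARTITION p_new VALUES LESS THAN MAXVALUE
      );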

BlaM
+1  A: 

2GB and about 15M records is a very small database - I've run much bigger ones on a Pentium III(!) and everything has still run pretty fast. If yours is slow, it is a database/application design problem, not a MySQL one.

ian