My MySQL server currently has 235 databases. Should I worry? They all have the same structure, with MyISAM tables.

The hardware is a virtual machine with 2 GB of RAM running on a 2.2 GHz quad-core AMD Opteron.

Recently cPanel sent me an email saying that MySQL had failed and had been restarted.

New databases are expected to be created, and I wonder whether I should add more memory or simply add another virtual machine.

A: 

I don't believe there is a hard limit; the only things really limiting you will be your hardware and the traffic these databases get.

You seem to have very little memory, which probably means you don't have massive numbers of connections...

You should start by profiling usage for each database (or set of databases, depending on how they are used, of course).

My suggestion: MySQL (or any database server, for that matter) could use more memory. You can never have enough.

Ariel
A: 

You are doing it wrong.

Comment with some specifics about your databases, and we can probably fill you in on where your design went wrong.

plor
Thank you all for your comments. The system is similar to Google Analytics: users' website visits are logged into a "master" table. A native application monitors the master table, processes the registered visits, and writes them to each user's database. Each user has its own DB; this was decided for sharding. Various reports and statistics are run for each user, and it is faster if a report runs only on that user's DB (less data). I know this is not the best setup, but we have to deal with it for a while.
xatalin
+1  A: 

The "databases" in mysql are really catalogues, is has no effect on its limits whether you put all the tables in one or each in its own.

The main problem is the table cache. Without tuning it, you'll have the default table cache (typically 64 entries), which means you will be closing a table every time you open one. This is incredibly bad.

With MyISAM it's even worse, because closing a table throws its key blocks out of the key cache, which means subsequent index lookups or scans will be reading actual blocks from disc. That is horribly slow and really needs to be avoided.
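As a sketch of what this implies for tuning: the MyISAM key cache is sized by `key_buffer_size` in my.cnf. The value below is purely illustrative; size it against your total index size and the 2 GB of RAM mentioned in the question.

```ini
[mysqld]
# Illustrative value only - size this to your indexes and available RAM.
# Keeping index blocks cached avoids the disc reads described above.
key_buffer_size = 256M
```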

My advice is:

  • If possible, immediately increase the table cache to more than the total number of tables
  • Monitor the global status variable Opened_tables; if it increases rapidly, this is bad.
  • Carry out performance and robustness testing on the same hardware in a non-production environment (if you are not doing so already).
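The first two points can be checked from the mysql client. `table_cache` is the pre-5.1 name of the variable (later renamed `table_open_cache`), and the value in the last statement is only an example:

```sql
-- Current table cache size (the default is typically 64)
SHOW GLOBAL VARIABLES LIKE 'table_cache';

-- How many tables exist in total, to size the cache against
SELECT COUNT(*) FROM information_schema.TABLES;

-- If this counter climbs steadily under load, tables are being evicted
SHOW GLOBAL STATUS LIKE 'Opened_tables';

-- Example value only: raise the cache (also set it in my.cnf to persist)
SET GLOBAL table_cache = 1024;
```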
MarkR
I'd also add: check the file descriptor limit. While not directly tied to the number of databases, you can run out of file descriptors with lots of MyISAM tables.
nos
The normal startup script will increase the number (under Linux) as necessary; it's certainly possible to have 10,000 without any problem. Also, other engines may not use a file descriptor per table.
MarkR
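A sketch of the my.cnf setting involved, with an illustrative value (mysqld_safe will try to raise the OS limit to match it at startup):

```ini
[mysqld]
# Illustrative value only. Each open MyISAM table can hold two descriptors
# (.MYD data file + .MYI index file), so many databases add up quickly.
open_files_limit = 8192
```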
A: 

(reposting my comment for better visibility) Thank you all for your comments. The system is similar to Google Analytics: users' website visits are logged into a "master" table. A native application monitors the master table, processes the registered visits, and writes them to each user's database. Each user has its own DB; this was decided for sharding. Various reports and statistics are run for each user, and it is faster if a report runs only on that user's DB (less data). I know this is not the best setup, but we have to deal with it for a while.

xatalin
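For illustration only, the fan-out described above might look like the following; the post gives no schema, so all database, table, and column names here are hypothetical:

```sql
-- Hypothetical central log table (the real schema is not shown in the post)
CREATE TABLE master.visits (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    user_id INT UNSIGNED NOT NULL,
    url     VARCHAR(255) NOT NULL,
    ts      DATETIME     NOT NULL
) ENGINE=MyISAM;

-- The monitoring application then moves each user's rows into that user's
-- own database, so per-user reports scan only a small table:
INSERT INTO user_42.visits (url, ts)
    SELECT url, ts FROM master.visits WHERE user_id = 42;
DELETE FROM master.visits WHERE user_id = 42;
```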