Hello to all,

I am setting up a system to host WordPress blogs for users of a site I run. Right now, things are functioning pretty well within one database, with each blog running on its own set of prefixed tables (user1_posts, user2_posts, etc).
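Roughly, the layout looks like this (heavily simplified just to show the prefix scheme; the real WordPress tables have far more columns):

    -- Each user gets their own copy of every WordPress table under their prefix.
    CREATE TABLE user1_posts (
        ID           BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        post_title   TEXT,
        post_content LONGTEXT
    );

    CREATE TABLE user2_posts (
        ID           BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        post_title   TEXT,
        post_content LONGTEXT
    );

    -- ...and likewise user1_comments, user1_options, user2_comments, and so on.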

Despite this working so far, it feels a bit messy. If this database were to have 4000 tables, would that be an issue? Would it be better to split that into 400 databases? (Or am I missing a smarter way of doing this?)

Thank you!

+6  A: 

WordPress MU is what you need.

Shadi Almosri
Wow, thank you! Wish I'd found this last week. :S
Nicky Hajal
+2  A: 

Add a customer_id column to your tables and go vertical
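For illustration, a shared ("vertical") layout could look roughly like this; the table and column names here are just examples, not WordPress's actual schema:

    -- One set of tables shared by all customers; every row is tagged with
    -- the customer it belongs to.
    CREATE TABLE posts (
        customer_id  INT NOT NULL,
        post_id      BIGINT NOT NULL AUTO_INCREMENT,
        post_title   TEXT,
        post_content LONGTEXT,
        PRIMARY KEY (post_id),
        KEY idx_customer (customer_id)
    );

    -- Every query is then scoped to a single customer:
    SELECT post_title FROM posts WHERE customer_id = 42;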

james
+1  A: 

What you are doing is essentially giving each user a separate database, since their data exists in isolation anyway. I'd probably suggest splitting them out into their own databases.

Where it gets tricky is if you have a table where all users share their data. That's when you need to think about whether you want to keep that data in one place or separately.

lomaxx
+3  A: 

The moment you introduce a digit into your column name, table name, or database name, you are very likely doing something wrong.

While there are exceptions to this rule, they are rare. Please note that.

Randal Schwartz
fwiw, this is exactly what WordPress MU does, now that I've had a chance to mess with it. It ends up creating wp_1_posts, wp_2_posts, etc. Supposedly it's running wordpress.com, so it can't be too terrible in this case.
Nicky Hajal
By that logic, Windows is a superior operating system, simply because everyone uses it! :-)
Randal Schwartz
A: 

Depending on the size of the tables and on the database software, you may want to consider partitioning.
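On MySQL (5.1 or later), a shared table could be hash-partitioned on the customer column. A rough sketch with hypothetical names; note that MySQL requires the partitioning column to be part of every unique key, hence the composite primary key:

    CREATE TABLE posts (
        customer_id  INT NOT NULL,
        post_id      BIGINT NOT NULL AUTO_INCREMENT,
        post_title   TEXT,
        post_content LONGTEXT,
        PRIMARY KEY (customer_id, post_id),
        KEY idx_post (post_id)
    )
    PARTITION BY HASH (customer_id)
    PARTITIONS 16;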

If the table is not large, and ultimate security is not a requirement, having a customer_id column will work.

4000 tables in a schema is largish; it may or may not cause performance issues, depending on the database.

Maintaining 400 databases will likely become a large headache. Think carefully about routine maintenance (backups, upgrades, etc.) and how you'll do it when you have 400 databases. I personally would avoid this route.

Maybe a middle road: a database for every 50 customers?

+1  A: 

It depends on what engine you're using. If you're using MyISAM, performance will suffer badly if you have more tables than the table cache size set in my.cnf.

Setting the table cache to more than the total number of tables is vital; otherwise MySQL will have to keep opening and closing tables, which blows out the index cache in the key buffer and means heaps of unnecessary IO.
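A quick way to check (a sketch; the variable is table_cache on older MySQL versions and table_open_cache on newer ones, and the permanent setting belongs in my.cnf):

    -- If Opened_tables keeps climbing while the server runs, the cache is too small.
    SHOW GLOBAL STATUS LIKE 'Opened_tables';
    SHOW GLOBAL VARIABLES LIKE '%table%cache%';

    -- Then raise table_cache (or table_open_cache) in my.cnf to something above
    -- the total number of tables, e.g. 8192 for roughly 4000 tables.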

It doesn't make any difference whether you use multiple databases or the same one; the MyISAM table cache has the same effect either way.

The same thing may happen for other engines, but I think the effect is much less pronounced (e.g. InnoDB doesn't lose its cache when closing a table, but it's still not free).

MarkR
A: 

4000 tables is a little big.

If you plan to split your tables across different databases, you could look at a meta-database arrangement: one meta DB sitting above all the other databases, holding the connection details for each of them as well as any common information. That way you should be able to scale easily, and with your existing running system you would only need a wrapper over the meta DB to support your app.
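A minimal sketch of such a meta/routing table, with purely hypothetical names:

    -- One row per customer, telling the application which database holds
    -- that customer's blog.
    CREATE TABLE customer_directory (
        customer_id INT NOT NULL PRIMARY KEY,
        db_host     VARCHAR(255) NOT NULL,
        db_name     VARCHAR(64)  NOT NULL,
        db_user     VARCHAR(32)  NOT NULL
    );

    -- The wrapper looks up the connection details, then queries that database.
    SELECT db_host, db_name, db_user
    FROM customer_directory
    WHERE customer_id = 42;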

I am not sure how much this will help!

Nitin Mittal