views:

439

answers:

1

I have a database that needs to be able to scale up to billions of entries or rows.

  • Can this many rows be supported per single table? Is it advisable?
  • Would a single table be split across several nodes if stored with the NDBCLUSTER engine?
  • Other load balancing techniques?
  • What are some advisable methods of deploying such a database?
  • What are best practices for a database with this many rows to gain more performance?
  • Would MySQL do, or should I look elsewhere?
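
On the NDBCLUSTER question, a minimal sketch (the table and column names here are hypothetical): tables created with the NDB engine are automatically partitioned across the cluster's data nodes, by default on a hash of the primary key, so no manual splitting of a single table is required.

```sql
-- Hypothetical schema for illustration only.
-- With ENGINE=NDBCLUSTER, MySQL Cluster distributes the rows of this
-- one logical table across all data nodes, hashing the primary key.
CREATE TABLE events (
    id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    created_at DATETIME        NOT NULL,
    payload    VARBINARY(255),
    PRIMARY KEY (id)
) ENGINE=NDBCLUSTER;
```

Reads and writes still go through ordinary SQL; the distribution is transparent to the application.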
+2  A: 

We have tables with 22 million rows, and there's no bottleneck in sight, at least none that enough RAM can't fix. In general there is no easy yes or no; it depends on the nature of your data, the table engine, and so on.

If you disclose more about the kind of data you're saving, the answer can be more detailed.

My only general advice for large databases is to exhaust the hardware options before resorting to replication and/or sharding for performance reasons (keeping a slave around for backup is a different story). You also need to know your index-fu and the obvious server switches/options in order to tune the database.
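
To illustrate the index-fu point, a sketch against a hypothetical `events` table (table, column, and index names are assumptions, not from the question): `EXPLAIN` shows whether MySQL can use an index for a query, and adding one where the plan shows a full scan is usually the cheapest big win on large tables.

```sql
-- Check the execution plan for a typical range query.
EXPLAIN SELECT id, payload
  FROM events
 WHERE created_at >= '2010-01-01'
   AND created_at <  '2010-02-01';

-- If the plan reports a full table scan (type: ALL), add an index
-- on the filtered column so the range can be resolved via the index:
ALTER TABLE events ADD INDEX idx_created_at (created_at);
```

On a table with hundreds of millions of rows, the difference between a full scan and an index range read is typically orders of magnitude, which is why this kind of tuning comes before replication or sharding.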

I can say more if you tell me what kind of data you're working with.

Till