I will create 5 tables, namely data1, data2, data3, data4 and data5 tables. Each table can only store 1000 data records.

When a new entry or when I want to insert a new data, I must do a check,

<?php
  $data1 = mysql_query("SELECT * FROM data1");
  if (mysql_num_rows($data1) > 1000) {
    $data2 = mysql_query("SELECT * FROM data2");
    if (mysql_num_rows($data2) > 1000) {
      // and so on...
    }
  }
?>

I think this is not the right way. I mean, if I am user 4500, it would take some time to do all the checks. Is there a better way to solve this problem?

A: 

You can keep a "tracking" table to keep track of the current table between requests.

Also be on the alert for race conditions (use transactions, or ensure only one process is running at a time).
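As a rough sketch (table and column names here are made up for illustration), a tracking table combined with SELECT ... FOR UPDATE inside a transaction prevents two concurrent inserts from both slipping past the 1000-row limit:

```sql
-- Hypothetical tracking table: remembers which dataN table is current.
CREATE TABLE current_table (
  table_name VARCHAR(64) NOT NULL,
  row_count  INT NOT NULL DEFAULT 0
);

START TRANSACTION;
-- Locks the tracking row so concurrent inserts serialize here.
SELECT table_name, row_count FROM current_table FOR UPDATE;
-- The application inserts into the table named above, then:
UPDATE current_table SET row_count = row_count + 1;
COMMIT;
```

When row_count reaches 1000, the same transaction can switch table_name to the next table and reset the counter.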

Also, don't fetch every row with $data1 = mysql_query("SELECT * FROM data1"); and nested ifs just to get a count. Do something like:

$i = 1;
do {
  // COUNT(*) avoids pulling all rows to the client just to count them.
  $rowCount = mysql_result(mysql_query("SELECT COUNT(*) FROM data$i"), 0);
  $i++;
} while ($rowCount >= 1000);
Lance Rushing
+1  A: 

Read up on how to count rows in MySQL.

Depending on which storage engine you are using, COUNT(*) can be quite expensive: on InnoDB it requires scanning the table or an index, so those counts are better maintained by triggers and stored in an adjacent tracking table.
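A minimal sketch of the trigger approach (the stats table and trigger name are assumptions, not a fixed convention):

```sql
-- Hypothetical counter table: keeps a running row count for data1
-- so that expensive COUNT(*) scans are avoided on each insert.
CREATE TABLE data1_stats (row_count INT NOT NULL DEFAULT 0);
INSERT INTO data1_stats VALUES (0);

CREATE TRIGGER data1_after_insert AFTER INSERT ON data1
FOR EACH ROW UPDATE data1_stats SET row_count = row_count + 1;
```

A matching AFTER DELETE trigger would decrement the count to keep it accurate.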

The structure you describe is often designed around a mapping table first. One queries the mapping table to find the destination table associated with a primary key.
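A sketch of such a mapping table (column names are illustrative assumptions):

```sql
-- Maps each primary key to the physical table that holds its row.
CREATE TABLE data_map (
  record_id  INT NOT NULL PRIMARY KEY,
  table_name VARCHAR(64) NOT NULL
);

-- One cheap lookup replaces probing data1, data2, ... in turn:
SELECT table_name FROM data_map WHERE record_id = 4500;
```

The application then queries the table named in the result, so the cost of finding "user 4500" no longer grows with the number of data tables.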

memnoch_proxy
A: 

I'd be surprised if MySQL doesn't have some fancy-pants way to manage this automatically (or at least, better than what I'm about to propose), but here's one way to do it.

1. Insert the record into `data`
2. Check the row count of `data`
3. If >= 1000,
    - CREATE TABLE dataX LIKE data;
      (X will be the number of tables you have + 1)
    - INSERT INTO dataX SELECT * FROM data;
    - TRUNCATE TABLE data;

This means you will always be inserting into the 'data' table, and 'data1', 'data2', 'data3', etc are your archived versions of that table.
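The rotation above can be sketched as plain SQL (table and column names are illustrative; in practice the new table name would be built by the application):

```sql
-- Normal path: always insert into the live table.
INSERT INTO data (payload) VALUES ('new record');

-- Rotation path, run once data reaches 1000 rows.
-- Here 3 stands for: number of existing archive tables + 1.
CREATE TABLE data3 LIKE data;          -- same columns, indexes, engine
INSERT INTO data3 SELECT * FROM data;  -- copy the full batch out
TRUNCATE TABLE data;                   -- empty the live table again
```

Note that TRUNCATE TABLE causes an implicit commit in MySQL, so the rotation cannot be wrapped in a single rollback-able transaction.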

nickf
A: 

You can create a MERGE table like this:

CREATE TABLE all_data ([col_definitions]) ENGINE=MERGE UNION=(data1,data2,data3,data4,data5);

Then you would be able to count the total rows with a query like SELECT COUNT(*) FROM all_data. (Note that the MERGE engine requires the underlying tables to be identically structured MyISAM tables.)

Jason
A: 

If you're using MySQL 5.1 or above, you can let the database handle this (nearly) automatically using partitioning:

Read this article or the official documentation
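As a sketch, a range-partitioned table (MySQL 5.1+; the id ranges and column names here are just an example) keeps roughly 1000 rows per partition with no application-side bookkeeping at all:

```sql
CREATE TABLE data (
  id      INT NOT NULL,
  payload VARCHAR(255),
  PRIMARY KEY (id)
)
PARTITION BY RANGE (id) (
  PARTITION p0 VALUES LESS THAN (1000),
  PARTITION p1 VALUES LESS THAN (2000),
  PARTITION p2 VALUES LESS THAN (3000),
  PARTITION p3 VALUES LESS THAN MAXVALUE
);
```

Queries still target the single table `data`; MySQL routes each row to the right partition automatically.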

Cassy
+3  A: 

I haven't decided the numbers; it could be 5000 or 10000 records. The reason is flexibility and portability. Well, one of my SQL gurus suggested I do it this way.

Unless your guru was talking about something like partitioning, I'd seriously doubt his advice. If your database can't handle more than 1000, 5000 or 10000 rows, look for another database. Unless you have a really specific example of how a record limit will help you, it probably won't. With the amount of overhead it adds, it probably only complicates things for no gain.

A properly set up database table can easily handle millions of records. Splitting it into separate tables will most likely increase neither flexibility nor portability. If you accumulate enough records to run into performance problems, congratulate yourself on a job well done and worry about it then.

deceze
+1. Always check for the validity of such statements. In fact, if the person in question calls *himself* a guru, then disregard his advice immediately. ;-)
Duroth
+1 'cause I don't see any reason for this either. It does not make queries faster, and it does not help with data security... the only reason I could think of is that the database files should not be larger than xxxKB... but there's no reason for that either.
Bobby