Hi,

I have a database efficiency question.

Here is some info about my table:

- table of about 500-1000 records
- records are added and deleted every day
- usually about the same number added and deleted each day (so the count of active records stays the same)

Now, my question is: when I delete records, should I (A) delete the record and move it to a new table?

Or should I (B) just have an "active" column and set it to 0 when the record is no longer active?
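To make the two options concrete, here is a minimal sketch using an in-memory SQLite database in place of MySQL/InnoDB; the table and column names (`items`, `items_archive`, `active`) are made up for illustration:

```python
import sqlite3

# SQLite stands in for MySQL/InnoDB here; the schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, active INTEGER DEFAULT 1)")
conn.execute("CREATE TABLE items_archive (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)", [("a",), ("b",), ("c",)])

# Option A: copy the record to an archive table, then delete it.
conn.execute("INSERT INTO items_archive SELECT id, name FROM items WHERE id = 1")
conn.execute("DELETE FROM items WHERE id = 1")

# Option B: keep the record in place, just flag it inactive.
conn.execute("UPDATE items SET active = 0 WHERE id = 2")

# Either way, live queries only see the active rows.
active = conn.execute("SELECT name FROM items WHERE active = 1").fetchall()
print(active)  # [('c',)]
```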

The reason I am hesitant to use B is that my site depends on the user being able to filter/sort this table of 500-1000 records on the fly (using AJAX), so I need it to be as fast as possible. I'm guessing a table with more records would be slower to filter. I am using MySQL with InnoDB.

Any input would be great. Thanks,

Andrew

A: 

It depends on what you are filtering/sorting on and how the table is indexed.

A third, not uncommon, option is a hybrid approach: inactivate records (B), optionally with a timestamp, and periodically archive them to a separate table (A), either en masse or based on the timestamp's age.
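A rough sketch of that hybrid, again with SQLite standing in for MySQL and hypothetical names (`items`, `items_archive`, `deactivated_at`):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, "
    "active INTEGER DEFAULT 1, deactivated_at REAL)"
)
conn.execute("CREATE TABLE items_archive (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)", [("a",), ("b",)])

# Step 1 (on delete): soft-delete, recording when the row was deactivated.
ninety_days_ago = time.time() - 90 * 86400
conn.execute(
    "UPDATE items SET active = 0, deactivated_at = ? WHERE id = 1",
    (ninety_days_ago,),
)

# Step 2 (run periodically, e.g. from a cron job): move rows that have
# been inactive longer than the cutoff into the archive table.
cutoff = time.time() - 30 * 86400
conn.execute(
    "INSERT INTO items_archive SELECT id, name FROM items "
    "WHERE active = 0 AND deactivated_at < ?",
    (cutoff,),
)
conn.execute("DELETE FROM items WHERE active = 0 AND deactivated_at < ?", (cutoff,))
```

This keeps the live table small while still letting you un-delete recently deactivated rows.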

Realistically, if your table is on the order of 1,000 rows, it's probably not worth fussing too much over it (assuming the scalability of other factors is known).

Rog
A: 

If you need to keep the records for some future purpose, I would set an Inactive bit.

As long as you have a primary key on the table, performance should be excellent when SELECTing the records.

Also, if you do the filtering/sorting on the client-side then the records would only have to be retrieved once.
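As a minimal sketch of the indexed lookup (SQLite in place of MySQL; the table name and `active` column are made up): with a primary key, and optionally an index on the flag column, selecting only the active rows stays fast.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, "
    "active INTEGER NOT NULL DEFAULT 1)"
)
# Index the flag column so WHERE active = 1 doesn't scan inactive rows.
conn.execute("CREATE INDEX idx_items_active ON items (active)")
conn.executemany(
    "INSERT INTO items (name, active) VALUES (?, ?)",
    [("a", 1), ("b", 0), ("c", 1)],
)

rows = conn.execute("SELECT name FROM items WHERE active = 1 ORDER BY name").fetchall()
print(rows)  # [('a',), ('c',)]
```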

John Rasch
+3  A: 

~1000 records is a very small number.

If a record can be deleted and re-added later, maybe it makes sense to have an "active" indicator.

duffymo
+1. 1000 rows is nothing, InnoDB will chew through that in no time even without proper indices (though you should obviously have those anyway).
bobince
thank you, i will keep that in mind
Andrew
+1  A: 

Realistically, this isn't a question about DB efficiency but about network latency and the amount of data you're sending over the wire. As far as MySQL goes, 1000 rows or 100k rows are going to be lightning-fast, so that's not a problem.

However, if you've got a substantial amount of data in those rows, and you're transmitting it all to the client through AJAX for filtering, the network latency is your bottleneck. If you're transmitting a handful of bytes (say 20) per row and your table stays around 1000 records in length, not a huge problem.

On the other hand, if your table grows (with inactive records) to, say, 20k rows, now you're transmitting 400k instead of 20k. Your users will notice. If the records are larger, the problem will be more severe as the table grows.

You should really do the filtering on the server side. Let MySQL spend 2 ms filtering your table before you spend a full second or two sending the results over AJAX.
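A sketch of what the server-side AJAX handler's query might look like (SQLite standing in for MySQL; `items`, `category`, `price`, and `fetch_items` are all hypothetical names):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE items (id INTEGER PRIMARY KEY, category TEXT, "
    "price REAL, active INTEGER DEFAULT 1)"
)
conn.executemany(
    "INSERT INTO items (category, price) VALUES (?, ?)",
    [("book", 5.0), ("book", 12.0), ("toy", 8.0)],
)

# The AJAX endpoint runs this query and returns only the matching rows,
# instead of shipping the whole table to the client for filtering.
def fetch_items(category, sort_desc=False):
    order = "DESC" if sort_desc else "ASC"
    return conn.execute(
        "SELECT id, price FROM items WHERE active = 1 AND category = ? "
        f"ORDER BY price {order}",
        (category,),
    ).fetchall()

print(fetch_items("book"))  # [(1, 5.0), (2, 12.0)]
```

Only the filter value travels over the wire in the request, and only the matching rows travel back.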

Adam Bellaire
very useful information, thank you.
Andrew