I have over 1,500,000 data entries, and the number will increase gradually over time. This data comes from 150 regions.

Should I create 150 tables to manage this large and growing data set? Would that be efficient? I need fast operations. ASP.NET and Oracle will be used.

A: 

If you mean 1,500,000 rows in a table then you do not have much to worry about. Oracle can handle much larger loads than that with ease.

If you need to identify the region each row came from, you can create a Region table and tie its ID to the big data table.
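For illustration, a minimal sketch of that layout (all table and column names here are hypothetical):

    -- Hypothetical lookup table for the 150 regions
    CREATE TABLE region (
        region_id   NUMBER(4)     PRIMARY KEY,
        region_name VARCHAR2(100) NOT NULL
    );

    -- The single big table carries a foreign key back to its region
    CREATE TABLE data_entry (
        entry_id   NUMBER(12)   PRIMARY KEY,
        region_id  NUMBER(4)    NOT NULL REFERENCES region (region_id),
        entered_at DATE         DEFAULT SYSDATE,
        payload    VARCHAR2(4000)
    );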

IMHO, you should post more details so we can help you better.

Raj More
+5  A: 

If all the data has the same shape, don't split it into different tables. Take a look at Oracle's table partitioning. One hundred fifty partitions, split out by region, are probably more in line with what you're looking for.
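As a rough sketch, reusing the hypothetical schema from the answer above, list partitioning by region could look like this (partition layout made up for illustration):

    -- Hypothetical: the big table, list-partitioned by region
    CREATE TABLE data_entry (
        entry_id  NUMBER(12)   PRIMARY KEY,
        region_id NUMBER(4)    NOT NULL,
        payload   VARCHAR2(4000)
    )
    PARTITION BY LIST (region_id) (
        PARTITION p_region_001 VALUES (1),
        PARTITION p_region_002 VALUES (2),
        -- ...one partition per region, 150 in total...
        PARTITION p_other      VALUES (DEFAULT)
    );

    -- A query filtering on region_id only scans the matching partition
    SELECT COUNT(*) FROM data_entry WHERE region_id = 2;

Queries that filter on the partition key get partition pruning for free, which is the main win over 150 separate tables.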

I would also recommend you look at the Oracle Database Performance Tuning Tips & Techniques book and browse Ask Tom on Oracle's website.

Rob
Partitioning is a chargeable extra on top of Enterprise Edition, i.e. it is seriously expensive. But worth the dosh if it solves your problem.
APC
+4  A: 

Only 1.5 M rows? Not a lot really...

Use one table; working out how to write a 150-way union across 150 tables will be murder.
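To see why, compare a simple per-region count in the two designs (table names hypothetical):

    -- One table: trivial
    SELECT region_id, COUNT(*) FROM data_entry GROUP BY region_id;

    -- 150 tables: the same question needs a 150-branch union
    SELECT 1 AS region_id, COUNT(*) FROM data_entry_region_001
    UNION ALL
    SELECT 2, COUNT(*) FROM data_entry_region_002
    UNION ALL
    -- ...148 more branches...
    SELECT 150, COUNT(*) FROM data_entry_region_150;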

Jonathan Leffler
Murder? More like suicide. :-)
Tomalak
The murder will be committed by those who have to write it out; you could claim it was 'suicide' by the designer who induced the others to commit murder. :D
Jonathan Leffler
+1  A: 

1.5 million rows doesn't really seem like that much. How many people access the table(s) at any given point? Do you have any indexes set up? If you expect the data to grow much larger, you may want to look into database partitioning.
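For instance, a composite index on whatever columns your most common queries filter by can matter far more than raw row count (a hypothetical sketch, reusing column names from the answers above):

    -- Hypothetical: index the columns your WHERE clauses actually use
    CREATE INDEX data_entry_region_ix
        ON data_entry (region_id, entered_at);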

FWIW, I regularly work with databases that have 100M+ rows. Performance shouldn't be bad unless you have thousands of people using the table at a time.

llamaoo7
A: 

A database with 2,000 rows can be slow. It all depends on your database design, indexes, keys, and, most importantly, the hardware your database server is running on. The way your application uses this data also matters: is it a read-intensive database or a transaction-intensive one? There is no single right answer to what you are asking.

rick schott
+1  A: 

One table per region is badly denormalized; you're probably going to lose a lot of efficiency there. One table per data-entry site is pretty unusual too. Normalization matters: it will save you a ton of time down the road, so make sure you're not storing any duplicate data.

If you're using Oracle, you shouldn't need multiple tables; it will handle far more than 1.5 million rows. If you need to speed up access to commonly used data, you could try a snowflake schema.
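A snowflake schema here would mean a central fact table with normalized dimension tables chained off it; a minimal sketch under that assumption (all names hypothetical):

    -- Hypothetical snowflake: the region dimension normalized out to country
    CREATE TABLE country (
        country_id   NUMBER(4)     PRIMARY KEY,
        country_name VARCHAR2(100) NOT NULL
    );

    CREATE TABLE region (
        region_id   NUMBER(4)     PRIMARY KEY,
        region_name VARCHAR2(100) NOT NULL,
        country_id  NUMBER(4)     REFERENCES country (country_id)
    );

    -- The fact table joins to region, which joins onward to country
    CREATE TABLE data_entry_fact (
        entry_id  NUMBER(12) PRIMARY KEY,
        region_id NUMBER(4)  REFERENCES region (region_id),
        amount    NUMBER
    );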

Satanicpuppy
A: 

You first need to consider what operations are going to access the table. How will inserts be performed? Will existing rows be updated, and if so, how? By how much will the rows grow, and what percentage of them will grow? Will rows get deleted? By what criteria? How will you be selecting data? By what criteria, and how many rows per query?

David Aldridge