views: 127

answers: 3
I need to work with several data samples, say N of them. The samples represent similar data but come from different origins, for example, order history in different shops. So the structure of all the samples is the same. To operate on the data I have several possibilities:

  1. Use N databases with identical schema, one for each sample

  2. Use one database, but N sets of tables. For example, User_1, ..., User_N; Product_1, ..., Product_N; Order_1, ..., Order_N; and so on.

  3. Use one database with one set of tables (User, Product, Order), but add to each table a helper column which holds the sample index. Clearly, this column should be indexed (a sketch follows below).
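
A minimal sketch of the third variant in T-SQL, with hypothetical table and column names (SampleID is the helper column, covered by an index):

    -- One shared set of tables; SampleID marks which sample/origin a row belongs to.
    CREATE TABLE Users (
        UserID   INT IDENTITY PRIMARY KEY,
        SampleID INT NOT NULL,
        UserName NVARCHAR(100) NOT NULL
    );

    CREATE TABLE Orders (
        OrderID   INT IDENTITY PRIMARY KEY,
        SampleID  INT NOT NULL,
        UserID    INT NOT NULL REFERENCES Users(UserID),
        OrderDate DATETIME NOT NULL
    );

    -- Index the helper column so per-sample filtering stays cheap.
    CREATE INDEX IX_Users_SampleID  ON Users (SampleID);
    CREATE INDEX IX_Orders_SampleID ON Orders (SampleID);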

The last variant seems to be the most convenient because all queries become simple. In the second case I would need to pass a table name to a query (stored procedure) as a parameter (is that even possible?).
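
On the parenthetical question: SQL Server cannot take a table name as an ordinary parameter, so variant 2 would need dynamic SQL. A minimal sketch, assuming a hypothetical Order_<n> naming scheme:

    -- Table names cannot be parameterized directly; the statement is built as a string.
    CREATE PROCEDURE GetOrderCountForSample
        @SampleIndex INT
    AS
    BEGIN
        DECLARE @sql NVARCHAR(MAX) =
            N'SELECT COUNT(*) FROM dbo.Order_' + CAST(@SampleIndex AS NVARCHAR(10)) + N';';
        EXEC sp_executesql @sql;
    END;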

So which way would you advise? Performance is very important.

+3  A: 

Step 1. Get a book on data warehousing -- since that's what you're doing.

Step 2. Partition your data into facts (measurable things like $'s, weights, etc.) and dimensions (non-measurable attributes like Product Name, Order Number, User Names, etc.)

Step 3. Build a fact table (e.g., order items) surrounded by dimensions of that fact. The order item's product, the order item's customer, the order item's order number, the order item's date, etc., etc. This will be one fact table and several dimension tables in a single database. Each "origin" or "source" is just a dimension of the basic fact.
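
A minimal sketch of such a star schema, with hypothetical names; the Source dimension is what stands in for the per-sample databases or tables:

    -- Dimension tables: one row per product, customer, date, source, ...
    CREATE TABLE DimProduct  (ProductKey  INT PRIMARY KEY, ProductName  NVARCHAR(100));
    CREATE TABLE DimCustomer (CustomerKey INT PRIMARY KEY, CustomerName NVARCHAR(100));
    CREATE TABLE DimDate     (DateKey     INT PRIMARY KEY, CalendarDate DATE);
    CREATE TABLE DimSource   (SourceKey   INT PRIMARY KEY, SourceName   NVARCHAR(100));  -- the shop / origin

    -- Fact table: one row per order item, foreign keys into every dimension,
    -- plus the measurable values (quantity, amount).
    CREATE TABLE FactOrderItem (
        OrderNumber INT            NOT NULL,
        ProductKey  INT            NOT NULL REFERENCES DimProduct(ProductKey),
        CustomerKey INT            NOT NULL REFERENCES DimCustomer(CustomerKey),
        DateKey     INT            NOT NULL REFERENCES DimDate(DateKey),
        SourceKey   INT            NOT NULL REFERENCES DimSource(SourceKey),
        Quantity    INT            NOT NULL,
        Amount      DECIMAL(12, 2) NOT NULL
    );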

Step 4. Use very simple "SELECT SUM() GROUP BY" queries to summarize and analyze your data.
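
For example, against the sketch above (names hypothetical), a per-source, per-product summary is a single query:

    SELECT s.SourceName,
           p.ProductName,
           SUM(f.Amount)   AS TotalAmount,
           SUM(f.Quantity) AS TotalQuantity
    FROM FactOrderItem f
    JOIN DimSource  s ON s.SourceKey  = f.SourceKey
    JOIN DimProduct p ON p.ProductKey = f.ProductKey
    GROUP BY s.SourceName, p.ProductName;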

This is the highest performance, most scalable way to do business. Buy Ralph Kimball's Data Warehouse Toolkit books for more details.

Do not build N databases with identical structure. Build one for TEST, and one for PRODUCTION, but don't build N.

Do not build N tables with identical structure. That's what keys are for.

S.Lott
Steps 2 and 3: there are also commercial parties who offer this kind of solution.
R van Rijn
+1  A: 

Well, if you separate the databases, you'll have smaller tables. That's usually more performant. If you ever need to get to another database, that is possible with Microsoft SQL Server. If you need to get to a database on another server, that's possible too.
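
For reference (the database and server names below are made up): SQL Server reaches another database on the same instance with a three-part name, and a database on another server with a four-part name once that server is configured as a linked server:

    -- Same instance, different database: Database.Schema.Table
    SELECT COUNT(*) FROM Shop1Db.dbo.Orders;

    -- Another server, configured as a linked server: Server.Database.Schema.Table
    SELECT COUNT(*) FROM [RemoteServer].Shop2Db.dbo.Orders;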

It will depend on how strongly correlated the data is.

Trevoke
There are no dependencies between data in different samples. Each sample is independent of the others. But... what if there are 100 databases? Is that all right?
flashnik
It depends on how much RAM the server has, how many servers there are, how many people are accessing each database, how many reads/writes per second, and how much bandwidth is available. By suggesting several databases, I was already guessing you would have several hundred megabytes of data per sample. If each sample only uses 50-100 MB of database space, then stick to one database and make your life easier by, as you mentioned, just adding a column with a sample id to each table.
Trevoke
A: 

Here is one example. Each row of the fact table in the example has one line item from the order. The OrderID field can be used to find all items from a specific order.

[image: example star schema with one fact row per order line item, keyed by OrderID]
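
Since the original image is not reproduced here, a minimal sketch of the same idea with hypothetical names, where OrderID is carried on the fact row (a degenerate dimension in Kimball's terms) and ties the line items of one order together:

    -- One fact row per order line item; OrderID groups the items of one order.
    CREATE TABLE FactOrderLine (
        OrderID     INT NOT NULL,
        ProductKey  INT NOT NULL,
        CustomerKey INT NOT NULL,
        Quantity    INT NOT NULL,
        Amount      DECIMAL(12, 2) NOT NULL
    );

    -- All items belonging to a specific order:
    SELECT * FROM FactOrderLine WHERE OrderID = 12345;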

Damir Sudarevic