
Hi guys,

I was wondering if anyone has ever had a chance to measure how 100 joined tables would perform. Each table would have an ID column with a primary key index, and all tables are related 1:1.

It is a common problem in many data entry applications where we need to collect 1000+ data points. One solution would be one big table with 1000+ columns; the alternative would be to split them into multiple tables and join them when necessary.

So perhaps the more realistic question is how 30 tables (30 columns each) would behave in a multi-table join.
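Something like this is what I have in mind (table and column names are just placeholders, and each real table would carry ~30 columns):

    CREATE TABLE PersonCore    (ID INT PRIMARY KEY, FirstName VARCHAR(50), LastName VARCHAR(50));
    CREATE TABLE PersonAddress (ID INT PRIMARY KEY REFERENCES PersonCore(ID), Street VARCHAR(100), City VARCHAR(50));
    CREATE TABLE PersonJob     (ID INT PRIMARY KEY REFERENCES PersonCore(ID), Employer VARCHAR(100), Salary INT);
    -- ...and so on, up to ~30 tables, all 1:1 on the same ID

    -- the "need everything" query is then a chain of joins on the primary key
    SELECT *
    FROM PersonCore c
    JOIN PersonAddress a ON a.ID = c.ID
    JOIN PersonJob     j ON j.ID = c.ID
    -- JOIN ...the remaining tables the same way
    WHERE c.ID = 42;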

500K-1M rows should be the expected size of the tables.

Cheers

+3  A: 

As a rule of thumb, any more than 25 joins might be a performance problem. I try to keep joins below 10-15, but it depends on the database activity, the number of concurrent users, and the ratio of reads to writes.

Suggest you look at indexed views.

With any well-tuned database, 'good' indexes for the query workload are the key.
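For instance (a rough sketch, assuming SQL Server and reusing the placeholder tables from the question), an indexed view can materialise the join you run most often, so queries read one clustered index instead of performing the join every time:

    CREATE VIEW dbo.PersonWide
    WITH SCHEMABINDING
    AS
    SELECT c.ID, c.FirstName, a.City, j.Employer
    FROM dbo.PersonCore c
    JOIN dbo.PersonAddress a ON a.ID = c.ID
    JOIN dbo.PersonJob     j ON j.ID = c.ID;
    GO

    -- the unique clustered index is what actually materialises the view
    CREATE UNIQUE CLUSTERED INDEX IX_PersonWide ON dbo.PersonWide (ID);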

Mitch Wheat
The real killer isn't the number of joins -- joining one large table to 25 very small ones against those tables' unique clustered indexes is typical, and is often quite fast indeed. It's when you have to scan large amounts of data across joins of larger tables that you get into trouble. And that's what it sounds like he's about to do here. Bah!
Dave Markle
@Dave Markle: True. I was simplifying.
Mitch Wheat
10-15? Suddenly I don't feel so bad about joining like 5 tables...
Mark
lol, I'm glad I helped. Yeah, I'm thinking the 500K rows will be a bigger killer than the 30+ joins.
deian
@deian: It's all about what you return and how it's indexed. If your queries actually only return one row at a time, joined 30 ways, you probably won't have an issue as long as you have decent indexes. If you actually want to return all those rows, you'll be in a world of hurt.
Dave Markle
Well, the general reason one would split the data is that you use it in pieces. An example: personal details, address, employment, etc. Those are tables linked by an ID, but you usually use the data separately. Only on rare occasions would you need everything (the whole 1000+ data points) together. Queries that return many rows would span 2-5 tables at most (I'd think).
deian
+1  A: 

They'd most likely perform terribly, unless you had a very small number of rows per table.

Go for a wider table, but normalize it properly. My guess is that if you normalize your data properly, you will have a slightly more sane design.

Dave Markle
A: 

There's no way to better organise the tables? For example a "DataPointTypes" and "DataPointValues" table?

For example (and I don't know your particular circumstances), if all of your tables are like "WebsiteDataPoints (WebsitePage, Day, Visits)", "StoreDataPoints (Branch, Week, Sales)", etc., you could instead have:

DataPointSources(Name) 
 (with data: Website,Store)

DataPointTypes(SourceId, ColumnName) 
 (with data: (Website, WebsitePage), (Website, Day), (Store, Branch), (Store, Sales) etc.)

DataPointEntry(Id, Timestamp)

DataPointValues (EntryId, TypeId, Value (as varchar probably)) 
 (with data: (1, Website-WebsitePage, 'pages.php'), (2, Store-Branch, 'MainStore'), (1, Website-Day, '12/03/1980'), (2, Store-Sales, '35') etc.)

In this way each table becomes a source, each column becomes a type, each row becomes an entry, and each cell becomes a value.
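A rough DDL version of that pseudo-schema might look like this (column types are only a guess):

    CREATE TABLE DataPointSources (Id   INT PRIMARY KEY,
                                   Name VARCHAR(50));                      -- 'Website', 'Store'

    CREATE TABLE DataPointTypes   (Id         INT PRIMARY KEY,
                                   SourceId   INT REFERENCES DataPointSources(Id),
                                   ColumnName VARCHAR(50));                -- 'WebsitePage', 'Day', 'Branch', ...

    CREATE TABLE DataPointEntry   (Id             INT PRIMARY KEY,
                                   EntryTimestamp DATETIME);

    CREATE TABLE DataPointValues  (EntryId INT REFERENCES DataPointEntry(Id),
                                   TypeId  INT REFERENCES DataPointTypes(Id),
                                   Value   VARCHAR(255),                   -- everything stored as text
                                   PRIMARY KEY (EntryId, TypeId));

Retrieving one logical record is then a matter of filtering (or pivoting) DataPointValues by EntryId, rather than joining 30 wide tables.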

Graphain
could you elaborate a bit on the idea?
deian
Elaborated; see my edit. I think this might be appropriate for you given the information you've provided. Remember to use int IDs for everything, though; this is just a 'pseudo-schema' to give you a quick idea of what I mean.
Graphain
A: 

What you describe is similar to the implementation of a column-oriented database (Wikipedia). The data is stored in "column-major" format, which slows down adding each row but is much faster for querying when a WHERE clause restricts the returned rowset.

Why is it that you would rather split the rows? Is it that you measure the data elements for each row at different times? Or is it that the query result of a row would be very large?

Since I first posted this, you answered below that your reason for wanting to split the table is that you usually work with only a subset of the data.

In that case, splitting the table can help your performance (the amount of runtime consumed by the query) to some extent. This may be an important factor in your wanting to work with less data -- particularly if your database engine handles large rows slowly.

If performance is not a concern, then rather than using SQL JOINs, it might serve you to explicitly list the columns you wish to retrieve in each query. For example, if you only wish to retrieve width, height, and length for a row, you could use SELECT width, height, length FROM datatable; rather than SELECT * FROM datatable; and get the same benefit of returning less data. The SQL statements would probably also be shorter than the alternative join statements we were considering.
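To illustrate (using the datatable example above, with hypothetical names for the split alternative), both approaches return the same narrow result:

    -- one wide table: just name the columns you need
    SELECT width, height, length
    FROM datatable
    WHERE id = 42;

    -- split tables: the same columns come back, but via a join
    SELECT d.width, d.height, d.length
    FROM datatable_core c
    JOIN datatable_dimensions d ON d.id = c.id
    WHERE c.id = 42;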

Heath Hunnicutt
Well, the reason for splitting the rows is that you usually work with a subset of columns. You need the whole record only when you do an export or something like that.
deian