Imagine a meta database with a high degree of normalization. It would blow up this input field if I attempted to describe it in full here. But imagine that every relationship in the entire database, across all tables, goes through one single table called link. It has these fields: master_class_id, master_attr_id, master_obj_id, class_id2, obj_id2. This one table can represent all kinds of relationships: 1:1, 1:n, m:n, self:self.

The problem I see is that this table is going to get HUUUUGE. Is that bad practice?
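For concreteness, a minimal sketch of the table I have in mind (the column types and the key are just placeholders, not the real schema):

    -- One table carrying every relationship in the database (sketch only).
    CREATE TABLE link (
        master_class_id INTEGER NOT NULL, -- class (table) of the owning side
        master_attr_id  INTEGER NOT NULL, -- attribute of that class holding the link
        master_obj_id   INTEGER NOT NULL, -- row id on the owning side
        class_id2       INTEGER NOT NULL, -- class (table) of the target side
        obj_id2         INTEGER NOT NULL, -- row id on the target side
        PRIMARY KEY (master_class_id, master_attr_id, master_obj_id,
                     class_id2, obj_id2)
    );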

+6  A: 

That is wrong on two counts:

  1. It'll be a tremendous bottleneck for all your queries and it'll kill any chance of throughput (see the query sketch after this list).

  2. It reeks of bad design: you should be able to describe things more concisely and closer to reality. If this really is the best way to store the data, consider partitioning, or even another paradigm instead of the relational model.
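To illustrate point 1: even a trivial lookup has to funnel through link, so every query in the system contends on the same table and the same indexes. A sketch (the class and attribute ids, and the orders table, are made up for illustration):

    -- Fetch one customer's orders through the single link table.
    -- 1 = "customer" class, 7 = "has orders" attribute, 2 = "order" class:
    -- all hypothetical ids.
    SELECT o.*
    FROM link l
    JOIN orders o ON o.id = l.obj_id2
    WHERE l.master_class_id = 1
      AND l.master_attr_id  = 7
      AND l.master_obj_id   = 42
      AND l.class_id2       = 2;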

Vinko Vrsalovic
+3  A: 

It is usually bad practice, but not because the table is huge. The problem is that you are mixing unrelated data in one table.

The reason to keep the links in separate tables is that you won't need to use them together.

It is a common mistake that is also made with the data itself: you should not mix two sets of data in one table just because the fields look similar, if the data itself is unrelated.
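A sketch of the alternative: one junction table per real relationship, each with enforceable foreign keys (table and column names are purely illustrative):

    CREATE TABLE product (id INTEGER PRIMARY KEY);
    CREATE TABLE tag     (id INTEGER PRIMARY KEY);

    -- Each m:n relationship gets its own small, indexable table.
    CREATE TABLE product_tag (
        product_id INTEGER NOT NULL REFERENCES product(id),
        tag_id     INTEGER NOT NULL REFERENCES tag(id),
        PRIMARY KEY (product_id, tag_id)
    );

Each such table stays only as big as the relationship it represents, and the database can enforce referential integrity on every column.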

Tomas
+4  A: 

In a word: yes, this is a bad idea.

Without going into too many details, I would offer the following:

  • for a meta database, the link table should be split by (high-level) entity: that is, you should have a separate link table for each entity (sketched below)
  • another link table is required for the between-entities links

Normally the high-level entities are fairly easy to identify, like Customer.
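A rough sketch of that split (entity names and columns are illustrative assumptions):

    CREATE TABLE customer (id INTEGER PRIMARY KEY);

    -- Links internal to the Customer entity; one such table per
    -- high-level entity, so at least customer_id can be a real FK.
    CREATE TABLE customer_link (
        customer_id INTEGER NOT NULL REFERENCES customer(id),
        attr_id     INTEGER NOT NULL, -- which customer attribute this link fills
        target_id   INTEGER NOT NULL  -- row in the attribute's own table
    );

    -- Links between different high-level entities.
    CREATE TABLE entity_link (
        class_id1 INTEGER NOT NULL, -- source entity class
        obj_id1   INTEGER NOT NULL, -- source row
        class_id2 INTEGER NOT NULL, -- target entity class
        obj_id2   INTEGER NOT NULL  -- target row
    );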

Steve De Caux
+1  A: 

I don't see the purpose of this type of table anyway. If you have a table A that is one-to-many to table B, then A still has a PK and B still has a PK, and B would normally contain an FK to A.

So in the Master_Table you will have to store the A PK and the B PK, which just duplicates what is already there. The only thing you 'lose' is the FK in table B, but you have merely migrated it into a giant table that is hard to deal with for the database, the DBA, and anyone coding against the db.

Tables like this appear most frequently in Access databases, and they show up on The Daily WTF because they are insanely hard to read and understand.

Oh! And a major problem is that, to make the table ubiquitous, you will have to make the columns generic, which will probably end up destroying data integrity.
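A sketch of the contrast (schemas are illustrative): in the conventional design the relationship is a single constrained column, while a generic link row restates the same pair with no constraint the database can check, because obj_id2 may point into a different table for every row.

    -- Conventional one-to-many: the FK lives in the "many" table and the
    -- database enforces it.
    CREATE TABLE a (
        id INTEGER PRIMARY KEY
    );

    CREATE TABLE b (
        id   INTEGER PRIMARY KEY,
        a_id INTEGER NOT NULL REFERENCES a(id) -- the whole relationship
    );

    -- The generic equivalent cannot carry a REFERENCES clause, since the
    -- table obj_id2 points at depends on class_id2:
    -- INSERT INTO link VALUES (class_of_a, attr, a_row, class_of_b, b_row);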

Arthur Thomas
+2  A: 

Relational databases are not really a good fit for this model.

It's possible to implement it but it will be quite slow. The main drawback is that you won't be able to index the links efficiently.

However, this design can be useful in two cases:

  1. The table stores only metadata: the declared relationships between the entities. The actual data are stored in plain relational tables, so these links are used only to show the structure, not in the actual queries (see the sketch after this list).

  2. The table stores structures which are complex but contain little data, so that ease of development outweighs the performance drawbacks.
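A sketch of case 1 (names are illustrative): the link-style table describes the schema, while the real rows live in ordinary tables with ordinary foreign keys.

    -- Metadata only: declared relationships between entity classes.
    CREATE TABLE meta_relationship (
        master_class_id INTEGER NOT NULL, -- e.g. the class id for "customer"
        master_attr_id  INTEGER NOT NULL, -- e.g. the attribute id for "orders"
        class_id2       INTEGER NOT NULL, -- e.g. the class id for "order"
        cardinality     TEXT    NOT NULL  -- '1:1', '1:n', 'm:n', 'self:self'
    );

Application code reads this table to display or validate the structure; the actual queries run against the plain tables.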

This design can be seen in several ORMs (one of which I even developed).

Quassnoi