I am currently researching best practices (at a reasonably high level) for designing applications that are highly maintainable and offer minimal friction to change. By "data tier" I mean database design, Object-Relational Mappers (ORMs) and general data access technologies.

From your experience, what have you found to be the common mistakes and bad practices in data tier development, and what measures have you taken, put in place, or can recommend to make the data tier a better place to be from a developer's perspective?

An example answer might address: what are the most common causes of a slow, poorly scalable, hard-to-extend data tier, and what measures (in design or refactoring) can be taken to cure them?

I am looking for war stories here and some real world advice that I can build into publicly available guidance documents and samples.

+1  A: 

Magic.

I have used Hibernate, which automatically stores and fetches objects from a database. It also supports lazy loading, so that a related object is only retrieved from the database when you ask for it. This works in some magic way I don't understand.

This all works fine as long as it works, but when it breaks down it is impossible to track down. I think we had a problem when we combined Hibernate with AOP: somehow the object had not yet been initialized by Hibernate by the time our code executed. The problem was very hard to debug, because Hibernate works in such mysterious ways.
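The "magic" is mostly a proxy standing between your code and the session. Here is a stripped-down sketch (hypothetical classes, not Hibernate's actual API) of why touching an uninitialised lazy field blows up once the session it depends on is gone:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for "a database hit made through an open session".
interface OrderSource {
    String loadCustomerName(long orderId);
}

class Order {
    private final long id;
    private final OrderSource source;  // the "session" the proxy depends on
    private String customerName;       // null until first access

    Order(long id, OrderSource source) {
        this.id = id;
        this.source = source;
    }

    // The related value is only fetched on first access. If the source
    // (session) is gone by then, a real ORM throws something like
    // Hibernate's LazyInitializationException.
    String getCustomerName() {
        if (customerName == null) {
            if (source == null) {
                throw new IllegalStateException("session closed: lazy field not initialised");
            }
            customerName = source.loadCustomerName(id);
        }
        return customerName;
    }
}
```

The AOP interaction described above fits the same shape: advice runs against the object before anything has triggered initialisation, so the state it observes is the proxy's, not the entity's.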

Sjoerd
StevenH
To some degree, all programmers need to rely on components that they don't fully understand. The trick is getting a good balance between time spent reading up on components and getting started writing code.
Carlos
A: 

Object-Relational mapping is bad practice. By this, I mean that it tends to produce data schemas that can only loosely be described as "relational", and so they scale poorly and exhibit poor data integrity.

This is because properly relational schemas have been through the process of normalisation, whereas the results of O-R Mapping are normally object classes implemented as database tables. These will not normally have been normalised, but will instead have been designed for the immediate convenience of the OO developer.
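To make the normalisation point concrete, here is a toy illustration (hypothetical classes, not from any real system) of the update anomaly a denormalised, class-per-screen schema invites, next to a normalised layout where each fact lives in one place:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Denormalised: every shipment row repeats the customer's city,
// the way a table mirroring a screen layout tends to.
class FlatShipment {
    String customer, city, cargo;
    FlatShipment(String customer, String city, String cargo) {
        this.customer = customer; this.city = city; this.cargo = cargo;
    }
}

// Normalised: the city is stored once, keyed by customer;
// shipments hold only the key.
class Shipments {
    Map<String, String> customerCity = new HashMap<>(); // "customers" table
    List<String[]> shipments = new ArrayList<>();       // (customer, cargo)

    void putCustomer(String name, String city) { customerCity.put(name, city); }
    void addShipment(String customer, String cargo) {
        shipments.add(new String[] { customer, cargo });
    }
    String cityOf(String customer) { return customerCity.get(customer); }
}
```

With the flat layout, moving a customer means updating every duplicated row; miss one and the data disagrees with itself. With the normalised layout, one update changes the fact everywhere it is used.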

Of course, in cases where the persistent data requirements are minimal, this is unimportant.

However, I once worked for a shipping company that had grown by taking over several other companies, and had outsourced development of an integrated operational system (to replace the various company-specific systems it had inherited) to a company using an OO methodology, with a data schema produced by O-R mapping. The performance characteristics of the system being developed were so poor, and the data schema so complex, that the shipping company dropped it after something like two years of development - before it even went live!

This was a direct consequence of the O-R mapping; the worst complexity in the schema (and the consequently poor performance) was caused by the existence of tables created solely as artifacts of the OO design process - they reflected screen layouts, not data relationships.

Mark Bannister
I think you're blaming O-R mapping for a poor implementation by the system developers.
JonoW
@JonoW: I don't think so - the problems (as far as I could determine) were caused by a lack of normalisation, resulting in fragmentation of relational entities and poor data integrity. I have seen similar results (with less immediately catastrophic consequences, due to smaller and less complicated datasets) in other O-R mapping-developed schemas.
Mark Bannister
O-R mapping isn't a magic solve-your-problems solution either - it introduces various under-the-hood complexities and uses reflection heavily, so if performance is a factor I wouldn't use it. That said, it is easy to write your own code generator (or find someone else's) to generate the data layer from the db schema.
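As a sketch of that suggestion - a toy generator (column list hard-coded here; a real one would read it from JDBC `DatabaseMetaData`) that emits the source of a data-layer class from a table description:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy data-layer generator: table name + (column -> Java type) in,
// Java class source out. A real version would pull the column list
// from DatabaseMetaData.getColumns() instead of a hard-coded map.
class DaoGenerator {
    static String generate(String table, Map<String, String> columns) {
        StringBuilder src = new StringBuilder("public class " + table + " {\n");
        for (Map.Entry<String, String> col : columns.entrySet()) {
            String name = col.getKey(), type = col.getValue();
            String cap = Character.toUpperCase(name.charAt(0)) + name.substring(1);
            src.append("    private ").append(type).append(' ')
               .append(name).append(";\n");
            src.append("    public ").append(type).append(" get").append(cap)
               .append("() { return ").append(name).append("; }\n");
        }
        return src.append("}\n").toString();
    }
}
```

Because the generator starts from the db schema rather than the object model, the schema can be normalised first and the data layer simply follows it.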
Mr Shoubs
@Mark: which ORM did they use that forced them into an un-normalised db schema? That seems strange. If you look at the most popular ORMs, Hibernate/NHibernate, the design of the db schema is still up to the developer, and they certainly play nicely with normalised schemas.
JonoW
@JonoW: I don't know. I do know that (in my experience) OO developers seem to loathe normalisation. This may be because if implemented within the object model it compromises the model, while if implemented within the data tier it (apparently) makes the process of migrating object changes to the data tier either much more complicated, or completely non-viable.
Mark Bannister