views:

20

answers:

1

Every place I've worked in the past 15 years has had, on almost every table they use, EntryDate and UpdateDate columns (or some variant). The developers or DBAs all put in defaults of getdate() for those columns in SQL Server (I'm sure there is an equivalent in MySQL, Oracle, etc.), but they never use an update trigger to maintain the UpdateDate column.

Is there something terribly wrong with triggers? Am I missing a reason why such a simple addition to a table wouldn't eliminate the problems caused by forgetful developers who (like me) sometimes forget to set UpdateDate in their UPDATE statements?
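To make the idea concrete, here is a minimal sketch of the pattern: an insert default stamps EntryDate, and an AFTER UPDATE trigger stamps UpdateDate so no UPDATE statement has to remember it. The schema (a hypothetical `Customer` table) is invented for illustration, and SQLite syntax is used here so the example is self-contained; SQL Server would use getdate() defaults and its own CREATE TRIGGER syntax instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customer (
    Id         INTEGER PRIMARY KEY,
    Name       TEXT NOT NULL,
    -- default stamps the row on insert, like getdate() in SQL Server
    EntryDate  TEXT NOT NULL DEFAULT (datetime('now')),
    UpdateDate TEXT
);

-- The trigger the question proposes: stamp UpdateDate on every UPDATE,
-- so forgetful developers never have to set it themselves.
CREATE TRIGGER Customer_SetUpdateDate
AFTER UPDATE ON Customer
FOR EACH ROW
BEGIN
    UPDATE Customer
    SET UpdateDate = datetime('now')
    WHERE Id = NEW.Id;
END;
""")

# Neither statement mentions the audit columns, yet both get filled in.
conn.execute("INSERT INTO Customer (Name) VALUES ('Alice')")
conn.execute("UPDATE Customer SET Name = 'Alicia' WHERE Id = 1")

entry, update = conn.execute(
    "SELECT EntryDate, UpdateDate FROM Customer WHERE Id = 1").fetchone()
print(entry is not None, update is not None)  # → True True
```

Note the inner UPDATE inside the trigger body does not re-fire the trigger, because SQLite leaves recursive triggers off by default.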

The same could be said, possibly, of two other related columns, EntryUser and UpdateUser: if you give every user their own login to the database, triggers could take care of those two columns as well.
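The user-column variant can be sketched the same way. In SQL Server the trigger could call a built-in such as SUSER_SNAME() to get the login; SQLite has no login concept, so this illustration fakes one with a one-row session table that the triggers read. The `Orders` table and `SessionInfo` workaround are both invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Stand-in for the database login; SQL Server triggers would call
-- SUSER_SNAME() instead of reading a table like this.
CREATE TABLE SessionInfo (CurrentUser TEXT);
INSERT INTO SessionInfo VALUES ('app_login');

CREATE TABLE Orders (
    Id         INTEGER PRIMARY KEY,
    Item       TEXT NOT NULL,
    EntryUser  TEXT,
    UpdateUser TEXT
);

CREATE TRIGGER Orders_SetEntryUser
AFTER INSERT ON Orders
FOR EACH ROW
BEGIN
    UPDATE Orders
    SET EntryUser = (SELECT CurrentUser FROM SessionInfo)
    WHERE Id = NEW.Id;
END;

CREATE TRIGGER Orders_SetUpdateUser
AFTER UPDATE OF Item ON Orders
FOR EACH ROW
BEGIN
    UPDATE Orders
    SET UpdateUser = (SELECT CurrentUser FROM SessionInfo)
    WHERE Id = NEW.Id;
END;
""")

conn.execute("INSERT INTO Orders (Item) VALUES ('widget')")
conn.execute("UPDATE SessionInfo SET CurrentUser = 'other_login'")
conn.execute("UPDATE Orders SET Item = 'gadget' WHERE Id = 1")

row = conn.execute("SELECT EntryUser, UpdateUser FROM Orders").fetchone()
print(row)  # → ('app_login', 'other_login')
```

The UPDATE statement never touches UpdateUser, yet the trigger records which "login" made the change.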

What am I missing?

+3  A: 

Triggers are no different from other code: there are well-written triggers and poorly written ones. The poorly written ones cause problems in both performance and data quality. Because so few people who write triggers understand how to write them correctly, triggers have gotten a bad reputation.

Developers also tend to forget about triggers and then get frustrated when they can't figure out why something they think is strange (but which is really the designed behavior) is happening. It is not the trigger's fault when developers aren't competent to troubleshoot data issues.

For an updated-date column, it is short-sighted at best not to populate it with a trigger. Data is changed from more places than just the application; if you need the updated date, the trigger is the appropriate place to set it. Unfortunately, in today's world the great god of "easy to maintain" is crippling many of our systems from both a performance and a data quality perspective.

HLGEM
+1 Maintainability is an idol with feet of clay.
APC