I'm trying to find out whether I should be putting business-critical logic in a trigger or a constraint.
So far I've added logic in triggers, as that gives me control over what happens next and means I can provide custom user messages instead of an error that will probably confuse the users.

Is there any noticeable performance gain in using constraints over triggers, and what are the best practices for determining which to use?

+3  A: 

Generally speaking, I would prefer constraints; my code would catch the SQL Server errors and present something more friendly to the user.

Sam Meldrum
+4  A: 

Triggers can blossom into a performance problem. About the same time that happens, they've also become a maintenance nightmare. You can't figure out what's happening, and (bonus!) the application behaves erratically with "spurious" data problems. [Really, they're trigger issues.]

No end-user touches SQL directly. They use application programs. Application programs contain business logic in a much smarter and more maintainable way than triggers. Put the application logic in application programs. Put data in the database.

Assuming you and your "users" share a common language, you can explain the constraint violations to them. The alternative -- not explaining -- turns a simple database into a problem because it conflates the data and the application code into an unmaintainable quagmire.

"How do I get absolute assurance that everyone's using the data model correctly?"

Two (and a half) techniques.

  1. Make sure the model is right: it matches the real-world problem domain. No hacks, workarounds or shortcuts that can only be sorted out through complex hand-waving explanations, stored procedures and triggers.

  2. Help define the business model layer of the applications. The layer of application code that everyone shares and reuses.

    a. Also, be sure that the model layer meets people's needs. If the model layer has the right methods and collections, there's less incentive to bypass it to get direct access to the underlying data. Generally, if the model is right, this isn't a profound concern.

Triggers are a train-wreck waiting to happen. Constraints aren't.

S.Lott
Surely relying on applications to enforce business-critical logic is a problem waiting to happen. What about new applications? They will also have to include this business logic, which means relying on the app developers not to forget it. Controlling it at the DB end is surely the best way.
HAdes
I think that's why we have reusable code libraries and a separate model layer -- to assure that all applications are using the common business model. Anyway, that's what I do -- build a business model layer.
S.Lott
Logic should never be only in the application. It must be at the database level or data integrity is threatened when other applications or direct queries or imported data are put into the database.
HLGEM
@HLGEM: Make the business model layer actually do useful stuff and people won't bypass it, they'll just use it.
S.Lott
People will still often bypass the business logic layer. Even if the business layer is easy to use, someone will think they've got an easier way. My own philosophy is to include all business flow logic in the business layer, but data constraints at the DB. A DBMS is more than just a holding place.
Tom H.
@Tom H: True, "someone will think they've got an easier way". Until their unit tests fail because they bypassed the business model layer. That's why some organizations have QA departments.
S.Lott
Remember, the code that is out of your hands is in the hands of the enemy. In this case, the enemy is the jerk who thinks he knows better and tinkers with your data directly instead of using the API you gave him. Enforce integrity at the DB and he won't have a chance to hose it.
Coderer
@Coderer: When the model has problems, then there's two camps: the enemy who doesn't understand the workarounds and the rest of us, who do understand the workarounds and think they need to be enforced. When my model's bad, people work around it. Lesson learned: have the RIGHT model.
S.Lott
The new application may not be able to use the old model layer because it is using a new language (such as .NET vs Java).
RussellH
@RussellH: That doesn't make much sense. The relational database can always be used through a simple SQL API. If you're forced to use the lowest common denominator in access, then triggers and stored procedures will become a fairly large problem, not a solution.
S.Lott
+6  A: 

Personally I don't think triggers are quite as bad as they're made out to be, although I would always use a constraint where possible. In a modern RDBMS, the performance of triggers is comparable to that of constraints.

The debate over whether business logic should reside in the application rather than the DB depends to some extent on the environment; if you have many applications accessing the DB, both constraints and triggers can serve as a final guard that the data is correct.

Best Practice: if you can do it with a constraint, use a constraint.

Mitch Wheat
+8  A: 
Sklivvz
That last statement is not entirely true. For example, what if inserting into one table requires a row to be inserted in another table? For that you need a trigger, and it does not imply there is anything wrong with the schema.
Mitch Wheat
Ok, but in that case your database is not normalized correctly!?
Sklivvz
+3  A: 

Constraints and triggers are for two different things. Constraints are used to constrain the domain (valid inputs) of your data. For instance, an SSN would be stored as char(9), but with a constraint of [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9] (all numeric).
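
To make that concrete, here is a minimal sketch in T-SQL; the table, column and constraint names are invented for illustration. SQL Server's LIKE accepts exactly that character-class pattern inside a CHECK constraint:

    -- Hypothetical table; the CHECK constraint rejects any non-numeric SSN.
    CREATE TABLE dbo.Person (
        PersonId int IDENTITY(1,1) PRIMARY KEY,
        SSN      char(9) NOT NULL
            CONSTRAINT CK_Person_SSN_AllNumeric
            CHECK (SSN LIKE '[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]')
    );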

Triggers are a way of enforcing business logic in your database. Taking SSN again, perhaps an audit trail needs to be maintained whenever an SSN is changed - that would be done with a trigger.
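
A hedged sketch of that audit-trail idea, continuing the hypothetical dbo.Person table above (the audit table and trigger names are likewise made up):

    CREATE TABLE dbo.PersonAudit (
        AuditId   int IDENTITY(1,1) PRIMARY KEY,
        PersonId  int NOT NULL,
        OldSSN    char(9) NOT NULL,
        NewSSN    char(9) NOT NULL,
        ChangedAt datetime NOT NULL DEFAULT GETDATE()
    );
    GO

    CREATE TRIGGER dbo.trg_Person_AuditSSN
    ON dbo.Person
    AFTER UPDATE
    AS
    BEGIN
        -- Set-based: one UPDATE statement may change many rows, so read the
        -- inserted/deleted pseudo-tables instead of assuming a single row.
        INSERT INTO dbo.PersonAudit (PersonId, OldSSN, NewSSN)
        SELECT d.PersonId, d.SSN, i.SSN
        FROM deleted d
        JOIN inserted i ON i.PersonId = d.PersonId
        WHERE d.SSN <> i.SSN;
    END;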

In general, data integrity issues in a modern RDBMS can be handled with some variation of a constraint. However, you'll sometimes get into a situation where improper normalization (or changed requirements, resulting in now improper normalization) prevents a constraint. In that case, a trigger may be able to enforce your constraint - but it is opaque to the RDBMS, meaning it can't be used for optimization. It's also "hidden" logic, and can be a maintenance issue. Deciding whether to refactor the schema or use a trigger is a judgment call at that point.

Mark Brackett
A: 

I agree with everyone here about constraints. Use them as much as possible.

There is a tendency to overuse triggers, especially among new developers. I have seen situations where a trigger fires another trigger, which fires another trigger that repeats the first trigger, creating a cascade that ties up your server. This is a non-optimal use of triggers ;o)

That being said, triggers have their place and should be used when appropriate. They are especially good for tracking changes in data (as Mark Brackett mentioned). You need to answer the question "Where does it make the most sense to put my business logic?" Most of the time I think it belongs in the code, but you have to keep an open mind.

wcm
+1  A: 

If at all possible, use constraints. They tend to be slightly faster. Triggers should be used for complex logic that a constraint can't handle. Trigger writing is tricky as well; if you find you must write a trigger, make sure to use set-based statements, because triggers operate against the whole insert, update or delete (yes, there will be times when more than one record is affected - plan on that!), not just one record at a time. Do not use a cursor in a trigger if it can be avoided.
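
A hedged illustration of that set-based style; every name here (dbo.Orders, dbo.CustomerTotals) is invented. The aggregate over the inserted pseudo-table is what makes multi-row inserts safe:

    CREATE TRIGGER dbo.trg_Orders_MaintainTotals
    ON dbo.Orders
    AFTER INSERT
    AS
    BEGIN
        -- One set-based statement handles a 1-row or a 10,000-row INSERT
        -- alike; no cursor, no row-at-a-time loop.
        UPDATE t
        SET t.TotalAmount = t.TotalAmount + s.NewAmount
        FROM dbo.CustomerTotals t
        JOIN (SELECT CustomerId, SUM(Amount) AS NewAmount
              FROM inserted
              GROUP BY CustomerId) s
          ON s.CustomerId = t.CustomerId;
    END;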

As far as whether to put the logic in the application instead of a trigger or constraint: DO NOT DO THAT!!! Yes, the applications should have checks before they send the data, but data integrity and business logic must be at the database level, or your data will get messed up when multiple applications hook into it, when global inserts are done outside the application, etc. Data integrity is key to databases and must be enforced at the database level.

HLGEM
+1  A: 

@Mark Brackett: "Constraints are used to constrain the domain... Triggers are a way of enforcing business logic": it's not that simple in SQL Server, because its constraint functionality is limited (e.g. not yet full SQL-92). Take the classic example of a sequenced 'primary key' in a temporal database table: ideally I'd use a CHECK constraint with a subquery to prevent overlapping periods for the same entity, but SQL Server can't do that, so I have to use a trigger. Also missing from SQL Server is the SQL-92 ability to defer the checking of constraints; instead they are (in effect) checked after every SQL statement, so again a trigger may be necessary to work around SQL Server's limitations.
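
A sketch of that trigger workaround for the no-overlap rule; every name here is hypothetical, and a production version would also have to consider concurrency:

    CREATE TRIGGER dbo.trg_EmployeeHistory_NoOverlap
    ON dbo.EmployeeHistory
    AFTER INSERT, UPDATE
    AS
    BEGIN
        IF EXISTS (
            SELECT *
            FROM inserted i
            JOIN dbo.EmployeeHistory h
              ON h.EmployeeId = i.EmployeeId
             AND h.HistoryId <> i.HistoryId  -- don't compare a row to itself
             AND h.StartDate < i.EndDate     -- standard interval-overlap test
             AND i.StartDate < h.EndDate
        )
        BEGIN
            RAISERROR ('Overlapping periods for the same employee.', 16, 1);
            ROLLBACK TRANSACTION;
        END;
    END;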

onedaywhen
+2  A: 

@onedaywhen

You can have a query as a constraint in SQL Server; you just have to be able to fit it in a scalar function: http://www.eggheadcafe.com/software/aspnet/30056435/check-contraints-and-tsql.aspx
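
For illustration, a hedged sketch of that technique; the function, table and rule are all invented. The CHECK constraint calls a scalar UDF, and the UDF is free to query other rows:

    CREATE FUNCTION dbo.fn_ManagerCount (@DeptId int)
    RETURNS int
    AS
    BEGIN
        -- The function may query the table, which a plain CHECK expression cannot.
        RETURN (SELECT COUNT(*)
                FROM dbo.Employee
                WHERE DeptId = @DeptId AND IsManager = 1);
    END;
    GO

    ALTER TABLE dbo.Employee
    ADD CONSTRAINT CK_Employee_OneManagerPerDept
    CHECK (dbo.fn_ManagerCount(DeptId) <= 1);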

Meff
+1  A: 

@Meff: there are potential problems with the approach of using a function because, simply put, SQL Server CHECK constraints were designed with a single row as the unit of work, and they have flaws when working on a resultset. For some more details, see David Portas' blog post "Trouble with CHECK Constraints": http://blogs.conchango.com/davidportas/archive/2007/02/19/Trouble-with-CHECK-Constraints.aspx

onedaywhen
Very useful, thank you. I've only used a scalar UDF as a constraint once in a production db, and from that link it'll be OK, as the table only had 6 rows!
Meff
A: 

In addition to the other reasons to use constraints, the Oracle optimizer can use constraints to its advantage.

For example, if you have a constraint saying (Amount >= 0) and then you query with WHERE (Amount = -5), Oracle knows immediately that there are no matching rows.
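
A minimal sketch of the idea (names invented); given the CHECK constraint, the optimizer can prove the predicate is always false without touching the table:

    CREATE TABLE Payments (
        PaymentId NUMBER PRIMARY KEY,
        Amount    NUMBER NOT NULL,
        CONSTRAINT CK_Payments_NonNegative CHECK (Amount >= 0)
    );

    -- The optimizer combines the constraint (Amount >= 0) with the
    -- predicate (Amount = -5), detects the contradiction, and can
    -- return an empty result without a table scan.
    SELECT * FROM Payments WHERE Amount = -5;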

WW