views:

2122

answers:

8

I have triggers that manipulate and insert a lot of data into a Change tracking table for audit purposes on every insert, update and delete.

This trigger does its job very well, in other words, we are able to log the desired oldvalues/newvalues as per the business requirements for every transaction.

However, in some cases where the source table has a lot of columns, it can take up to 30 seconds for the transaction to complete. This is the unacceptable part.

Is there a way to make the trigger run asynchronously? Any examples?

A: 

Not that I know of, but are you inserting values into the audit table that also exist in the base table? If so, you could consider tracking just the changes. An insert would then log the change time, user, etc., and a bunch of NULLs (in effect the before values). An update would log the change time, user, etc., and the before value of the changed column only. A delete would log the change time, etc., and all values.

Also, do you have an audit table per base table or one audit table for the whole DB? Of course the latter can more easily result in waits as each transaction tries to write to the one table.
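The change-only idea could look something like this sketch. The `Orders` table, its `Status` column, and the `ChangeLog` table are all hypothetical names for illustration, not from the question:

```sql
-- Hypothetical change-only log table: one narrow row per changed column
CREATE TABLE ChangeLog (
    LogId      int IDENTITY PRIMARY KEY,
    OrderId    int NOT NULL,
    ColumnName sysname NOT NULL,
    OldValue   varchar(100) NULL,   -- NULL on insert (no before value)
    ChangedAt  datetime NOT NULL DEFAULT GETDATE(),
    ChangedBy  sysname NOT NULL DEFAULT SUSER_SNAME()
);
GO
CREATE TRIGGER trg_Orders_ChangeLog ON Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Log the before value only for rows where the column actually changed
    INSERT INTO ChangeLog (OrderId, ColumnName, OldValue)
    SELECT d.OrderId, 'Status', d.Status
    FROM deleted d
    JOIN inserted i ON i.OrderId = d.OrderId
    WHERE ISNULL(d.Status, '') <> ISNULL(i.Status, '');
END
```

Writing one narrow row per changed column keeps each audit insert small even when the base table is very wide, which is where the 30-second case comes from.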

Karl
+1  A: 

I wonder if you could tag a record for change tracking by inserting into a "to process" table, including who made the change, etc.

Then another process could come along and copy the rest of the data on a regular basis.
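A rough sketch of that idea, with hypothetical table and column names:

```sql
-- Lightweight "to process" table: the trigger records only keys and context
CREATE TABLE AuditToProcess (
    QueueId   int IDENTITY PRIMARY KEY,
    OrderId   int NOT NULL,
    Operation char(1) NOT NULL,       -- I/U/D
    ChangedAt datetime NOT NULL DEFAULT GETDATE(),
    ChangedBy sysname NOT NULL DEFAULT SUSER_SNAME()
);
GO
CREATE TRIGGER trg_Orders_Tag ON Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Tag inserts and updates from the inserted pseudo-table...
    INSERT INTO AuditToProcess (OrderId, Operation)
    SELECT OrderId,
           CASE WHEN EXISTS (SELECT * FROM deleted) THEN 'U' ELSE 'I' END
    FROM inserted
    UNION ALL
    -- ...and deletes from the deleted pseudo-table
    SELECT OrderId, 'D'
    FROM deleted
    WHERE NOT EXISTS (SELECT * FROM inserted);
END
```

A scheduled job could then join `AuditToProcess` back to the base table to fill in the rest. One caveat: the before values in the `deleted` pseudo-table are gone once the trigger returns, so anything needed from them would still have to be captured synchronously.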

GordyII
+1  A: 

There's a basic conflict between "does its job very well" and "unacceptable", obviously.

It sounds to me that you're trying to use triggers the same way you would use events in an OO procedural application, which IMHO doesn't map.

I would call any trigger logic that takes 30 seconds - no, more than 0.1 second - dysfunctional. I think you really need to redesign your functionality and do it some other way. I'd say "if you want to make it asynchronous", but I don't think this design makes sense in any form.

As far as "asynchronous triggers", the basic fundamental conflict is that you could never include such a thing between BEGIN TRAN and COMMIT TRAN statements because you've lost track of whether it succeeded or not.

le dorfier
You commented above that using Service Broker is "still breaking transaction control." I haven't used Service Broker, but wouldn't it be transactional?
Rob Garrison
It couldn't be if it's asynchronous. It can't be held in a transaction if you don't wait for it to finish to find out if it succeeded, to know whether to commit or roll back.
le dorfier
I apologize for splitting this conversation between two comment trails. I would think that if SB can allow a rollback from its queue, then that would be transactional. Once you've written it to the queue, you consider it successful. You make a good point, but I would see it as more of a design/definition issue (assuming that the write to the SB queue can be rolled back as part of the overall transaction).
Rob Garrison
+2  A: 

You can't make the trigger run asynchronously, but you could have the trigger synchronously send a message to a SQL Service Broker queue. The queue can then be processed asynchronously by a stored procedure.
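A minimal sketch of that setup; the object names here are made up, and the real plumbing (routes, activation bindings, poison-message handling) is left out:

```sql
-- One-time Service Broker setup (hypothetical names)
CREATE MESSAGE TYPE AuditMessage VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT AuditContract (AuditMessage SENT BY INITIATOR);
CREATE QUEUE AuditQueue;
CREATE SERVICE AuditService ON QUEUE AuditQueue (AuditContract);
GO
-- Inside the trigger: serialize the change, send it (fast), and return
DECLARE @handle uniqueidentifier, @msg xml;
SET @msg = (SELECT OrderId, Status
            FROM deleted
            FOR XML PATH('row'), ROOT('changes'));
BEGIN DIALOG CONVERSATION @handle
    FROM SERVICE AuditService
    TO SERVICE 'AuditService'
    ON CONTRACT AuditContract
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @handle MESSAGE TYPE AuditMessage (@msg);
GO
-- Later, an activated stored procedure drains the queue asynchronously
DECLARE @body xml;
RECEIVE TOP (1) @body = CAST(message_body AS xml) FROM AuditQueue;
-- ...parse @body and write the audit rows here...
```

The SEND itself is transactional: if the outer transaction rolls back, the message never appears on the queue, which is the point being debated in the comments above.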

Sean Reilly
But then you're still breaking transaction control.
le dorfier
Can someone who really understands Service Broker explain whether the comment above ("breaking transaction control") is true?
Rob Garrison
In order to commit or roll back a transaction, you must wait until everything has succeeded (to commit), or something has failed (to roll back). Asynchronous means you don't wait for it to finish to continue the rest of the logic.
le dorfier
But you've committed the data to Service Broker's queue, and that queue itself is reliable. I guess it could fail after being successfully written to SB's queue, but that seems like a different issue. It's an interesting question.
Rob Garrison
If you roll back the transaction, it reverts the "send to queue". If an error occurs processing the queue, the processor can send a reply to the original message. Asynchronous processing isn't exactly like traditional SQL, but you have all the tools you need for reliable processing.
Sean Reilly
Thanks for the interesting discussion.
Rob Garrison
A: 

I suspect that your trigger is one of those generic CSV/text-generating triggers designed to log all changes for all tables in one place. Good in theory (perhaps...), but difficult to maintain and use in practice.

If you ran it asynchronously (which would still require storing the data somewhere to be logged later), then you are not really auditing, nor do you have history ready to use.

Perhaps you could look at the trigger's execution plan and see which bit is taking the longest?

Can you change how you audit, say, to per-table? You could split the current log data into the relevant tables.

gbn
A: 

Create history table(s). While updating (/deleting/inserting) the main table, insert the old values of the record (the deleted pseudo-table in the trigger) into the history table; some additional info is needed too (timestamp, operation type, maybe user context). New values are kept in the live table anyway.

This way the triggers run fast(er), and you can shift the slow operations to the log viewer (procedure).
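A minimal per-table version of this, again with hypothetical names (a real source table would have more columns to mirror):

```sql
-- History table mirroring the audited columns plus change metadata
CREATE TABLE OrdersHistory (
    HistoryId int IDENTITY PRIMARY KEY,
    OrderId   int NOT NULL,
    Status    varchar(20) NULL,      -- old value, copied from deleted
    Operation char(1) NOT NULL,      -- 'U' or 'D'
    ChangedAt datetime NOT NULL DEFAULT GETDATE(),
    ChangedBy sysname NOT NULL DEFAULT SUSER_SNAME()
);
GO
CREATE TRIGGER trg_Orders_History ON Orders
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- One set-based insert of the old rows; new values stay in the live table
    INSERT INTO OrdersHistory (OrderId, Status, Operation)
    SELECT d.OrderId, d.Status,
           CASE WHEN EXISTS (SELECT * FROM inserted) THEN 'U' ELSE 'D' END
    FROM deleted d;
END
```

Reconstructing a row's full before/after history then happens at read time in the log-viewer procedure, outside the original transaction.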

Arvo
+3  A: 

These articles show how to use Service Broker for async auditing and should be useful:

Centralized Asynchronous Auditing with Service Broker

Service Broker goodies: Cross Server Many to One (One to Many) scenario and How to troubleshoot it

Mladen Prajdic
A: 

If you show us your trigger, we could possibly make suggestions to speed it up.

HLGEM