We have an entity split across 5 different tables. Records in 3 of those tables are mandatory. Records in the other two tables are optional (based on sub-type of entity).

One of the tables is designated the entity master. Records in the other four tables are keyed by the unique id from master.

An AFTER UPDATE/DELETE trigger is present on each table; when a record changes, the trigger saves the prior version of the record (read from the deleted pseudo-table inside the trigger) into a related history table. Each history table contains the related entity fields plus a timestamp.

So, live records are always in the live tables and history/changes are in the history tables. Historical records can be ordered by the timestamp column. Obviously, the timestamp columns are not correlated across history tables.
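
A minimal sketch of the pattern described above (all table and column names are illustrative, not the actual schema):

```sql
-- Entity master plus its history table; the AFTER trigger copies the
-- pre-change image of each row from the "deleted" pseudo-table.
CREATE TABLE MASTER (
    ID   INT          NOT NULL PRIMARY KEY,
    NAME VARCHAR(100) NOT NULL
);

CREATE TABLE MASTER_HISTORY (
    ID         INT          NOT NULL,
    NAME       VARCHAR(100) NOT NULL,
    CHANGED_AT DATETIME     NOT NULL DEFAULT GETDATE()
);
GO

CREATE TRIGGER TR_MASTER_AUDIT ON MASTER
AFTER UPDATE, DELETE
AS
    INSERT INTO MASTER_HISTORY (ID, NAME)
    SELECT ID, NAME
    FROM   deleted;
```

The same trigger shape is repeated on each of the other four tables.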

Now, for the more difficult part.

  1. Records are initially inserted in a single transaction. Either 3 or 5 records will be written in a single transaction.
  2. Individual updates can happen to any or all of the 5 tables.
  3. All records are updated as part of a single transaction. Again, either 3 or 5 records will be updated in a single transaction.
  4. Number 2 can be repeated multiple times.
  5. Number 3 can be repeated multiple times.

The application is supposed to display a list of point-in-time history entries based only on records written as single transactions (points 1, 3 and 5 above).

I'm currently having problems with an algorithm that will retrieve historical records based on timestamp data alone.

Adding a HISTORYMASTER table to hold the extra information about transactions seems to partially address the problem. A new record is added to HISTORYMASTER before every transaction, and the new HISTORYMASTER.ID is saved into each entity table during the transaction. Point-in-time history can then be retrieved by selecting the first record for a particular HISTORYMASTER.ID (ordered by timestamp).
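
As a sketch (assuming the history rows carry the HISTORYMASTER_ID written during the transaction and a CHANGED_AT timestamp; the names are hypothetical):

```sql
-- One HISTORYMASTER row is inserted at the start of every transaction;
-- its ID ends up on every history row produced by that transaction.
CREATE TABLE HISTORYMASTER (
    ID         INT IDENTITY(1,1) PRIMARY KEY,
    CREATED_AT DATETIME NOT NULL DEFAULT GETDATE()
);

-- Point-in-time list: the earliest history row saved under each
-- HISTORYMASTER.ID.
SELECT h.HISTORYMASTER_ID, MIN(h.CHANGED_AT) AS POINT_IN_TIME
FROM   MASTER_HISTORY h
GROUP BY h.HISTORYMASTER_ID
ORDER BY POINT_IN_TIME;
```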

Is there any more optimal way to manage audit tables based on AFTER (UPDATE, DELETE) TRIGGERs for entities spanning multiple tables?

+1  A: 

Your HistoryMaster seems similar to how we have addressed the history of multiple related items in one of our systems. Having a single point in the history schema to hang all the related changes from makes it easy to create a view that uses the history master as the hub and attaches the related information. It also allows you to avoid creating records in the history where an audit is not desired.
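
A rough sketch of such a hub view, using hypothetical history-table names keyed by an AuditID:

```sql
-- EntityAudit is the hub; each history table joins back to it on AuditID.
-- LEFT JOIN covers the optional sub-type tables.
CREATE VIEW EntityHistoryView AS
SELECT a.ID        AS AuditID,
       a.CreatedAt,
       mh.Name,
       oh.Detail
FROM   EntityAudit        a
JOIN   MasterHistory      mh ON mh.AuditID = a.ID
LEFT JOIN OptionalHistory oh ON oh.AuditID = a.ID;
```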

In our case the primary tables were called EntityAudit (where Entity was the "primary" item being retained) and all data was stored in EntityHistory tables related back to the audit record. Because we were using a data layer for business rules, it was easy to insert the audit rules into the data layer itself. I feel that the data layer is an optimal point for such tracking if and only if all modifications go through that data layer. If you have multiple applications using distinct data layers (or none at all), then I suspect that a trigger that creates the master record is pretty much the only way to go.

If you don't have additional information to track in the audit record (we track the user who made the change, for example, something not on the main tables), then I would contemplate putting the extra audit ID on the "primary" record itself. Your description does not seem to indicate you are interested in the minor changes to individual tables, but only in changes that update the entire entity set (although I may be misreading that). I would only do so if you don't care about the minor edits, though. In our case, we needed to track all changes, even to the related records.

Note that the use of an Audit/Master table has the advantage that you make minimal changes to the history tables compared to the source tables: a single AuditID (in our case a GUID, although autonumbers would be fine in non-distributed databases).

Godeke
A: 

Can you add a TimeStamp / RowVersion datatype column to the entity master table, and associate all the audit records with that?

But an update to any of the "child" tables will also need to update the master entity table to force the TimeStamp / RowVersion to change :(

Or stick a GUID in there that you freshen whenever one of the associated records changes.

Thinking that through out loud, it may be better to have a table joined 1:1 to the master entity that contains only the master entity ID and the "version number" of the record: either a TimeStamp / RowVersion, a GUID, an incremented number, or something else.
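
A sketch of that 1:1 version table (names are made up; a ROWVERSION column is bumped automatically by SQL Server whenever its row is updated):

```sql
CREATE TABLE ENTITY_VERSION (
    MASTER_ID INT NOT NULL PRIMARY KEY
        REFERENCES MASTER (ID),
    VERSION   ROWVERSION      -- alternatively a GUID or incremented INT
);
```

Every change to any of the five tables would then also issue an update against this row, so the version value changes once per logical change to the entity.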

Kristen
A: 

I think it's a symptom of trying to capture "abstract" audit events at the lowest level of your application stack - the database.

If possible, consider trapping the audit events in your business layer. This would allow you to capture the history per logical transaction rather than on a row-by-row basis. The date/time is unreliable for resolving things like this, as it can differ across rows within one transaction and be identical for concurrent (or closely spaced) transactions.

I understand that you've asked how to do this in DB triggers, though. I can't comment directly on SQL Server, but in Oracle you can overcome this by calling the DBMS_TRANSACTION.LOCAL_TRANSACTION_ID function to return the ID of the current transaction. If you can retrieve an equivalent SQL Server value, you can use it to tie the record updates for the current transaction together into a logical package.
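
In SQL Server, one possible equivalent (subject to permissions; querying this DMV requires VIEW SERVER STATE) is the transaction_id exposed by sys.dm_tran_current_transaction:

```sql
-- Returns the ID of the transaction the current session is running in.
-- Storing this value on each history row would group the rows written
-- by the same transaction into one logical change.
SELECT transaction_id
FROM   sys.dm_tran_current_transaction;
```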

Richard Nichols