We have a requirement in our project to store all the revisions (change history) for the entities in the database. Currently we have two design proposals for this:

e.g. for "Employee" Entity

Design 1:

// Holds Employee Entity
"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"

// Holds the Employee Revisions in Xml. The RevisionXML will contain
// all data of that particular EmployeeId
"EmployeeHistories (EmployeeId, DateModified, RevisionXML)"

Design 2:

// Holds Employee Entity
"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"

// In this approach we basically duplicate all the fields of Employees
// in EmployeeHistories and store the revision data there.
"EmployeeHistories (EmployeeId, RevisionId, DateModified, FirstName, 
      LastName, DepartmentId, .., ..)"

Is there any other way of doing this?

The problem with Design 1 is that we have to parse the XML each time we need to access the data. This slows the process down and also adds limitations, e.g. we cannot join on the revision data fields.

And the problem with Design 2 is that we have to duplicate each and every field of all entities (we have around 70-80 entities for which we want to maintain revisions).

+5  A: 

The way that I've seen this done in the past is have

Employees (EmployeeId, DateModified, < Employee Fields > , boolean isCurrent );

You never "update" this table (except to change the value of isCurrent); you just insert new rows. For any given EmployeeId, only one row can have isCurrent == 1.

The complexity of maintaining this can be hidden behind views and "instead of" triggers (in Oracle; I presume similar facilities exist in other RDBMSs). You can even go to materialized views if the tables are too big to be handled by indexes.
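A minimal sketch of that arrangement (T-SQL; table and column names are illustrative):

```sql
-- Versioned employee table: rows are inserted, never updated in place
-- (apart from flipping isCurrent).
CREATE TABLE Employees (
    EmployeeId   INT          NOT NULL,
    DateModified DATETIME     NOT NULL,
    FirstName    VARCHAR(50)  NOT NULL,
    LastName     VARCHAR(100) NOT NULL,
    DepartmentId INT          NOT NULL,
    isCurrent    BIT          NOT NULL,
    PRIMARY KEY (EmployeeId, DateModified)
);

-- Hide the history from everyday queries behind a view
CREATE VIEW CurrentEmployees AS
    SELECT EmployeeId, FirstName, LastName, DepartmentId
    FROM Employees
    WHERE isCurrent = 1;
```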

This method is ok, but you can end up with some complex queries.

Personally, I'm pretty fond of your Design 2 way of doing it, which is how I've done it in the past as well. It's simple to understand, simple to implement, and simple to maintain.

It also creates very little overhead for the database and application, especially when performing read queries, which is likely what you'll be doing 99% of the time.

It would also be quite easy to automate the creation of the history tables and the triggers that maintain them (assuming it's done via triggers).

Matthew Watson
A: 

We have had similar requirements, and what we found was that often times the user just wants to see what has been changed, not necessarily roll back any changes.

I'm not sure what your use case is, but what we have done is create an Audit table that is automatically updated with changes to a business entity, including the friendly name of any foreign key references and enumerations.

Whenever the user saves their changes we reload the old object, run a comparison, record the changes, and save the entity (all done in a single database transaction in case there are any problems).

This seems to work very well for our users and saves us the headache of having a completely separate audit table with the same fields as our business entity.

mattruma
+10  A: 

I think the key question to ask here is 'Who / What is going to be using the history'?

If it's going to be mostly for reporting / human readable history, we've implemented this scheme in the past...

Create a table called 'AuditTrail' or something that has the following fields...

[ID] [int] IDENTITY(1,1) NOT NULL,
[UserID] [int] NULL,
[EventDate] [datetime] NOT NULL,
[TableName] [varchar](50) NOT NULL,
[RecordID] [varchar](20) NOT NULL,
[FieldName] [varchar](50) NULL,
[OldValue] [varchar](5000) NULL,
[NewValue] [varchar](5000) NULL

You can then add a 'LastUpdatedByUserID' column to all of your tables which should be set every time you do an update / insert on the table.

You can then add a trigger to every table to catch any insert / update that happens and create an entry in this table for each field that's changed. Because the table is also supplied with the 'LastUpdatedByUserID' for each update / insert, you can access this value in the trigger and use it when adding to the audit table.

We use the RecordID field to store the value of the key field of the table being updated. If it's a combined key, we just do a string concatenation with a '~' between the fields.
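A sketch of what such a trigger might look like for one column of one table (illustrative T-SQL; a real system would generate one trigger per table, with a block per audited column):

```sql
CREATE TRIGGER trg_Employees_Audit ON Employees
AFTER UPDATE
AS
BEGIN
    -- One AuditTrail row per changed field; shown for FirstName only
    INSERT INTO AuditTrail (UserID, EventDate, TableName, RecordID,
                            FieldName, OldValue, NewValue)
    SELECT i.LastUpdatedByUserID, GETDATE(), 'Employees',
           CAST(i.EmployeeId AS VARCHAR(20)),
           'FirstName', d.FirstName, i.FirstName
    FROM inserted i
    JOIN deleted d ON d.EmployeeId = i.EmployeeId
    WHERE ISNULL(i.FirstName, '') <> ISNULL(d.FirstName, '');
    -- ...repeat the INSERT..SELECT for each remaining audited column
END;
```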

I'm sure this system may have drawbacks - for heavily updated databases the performance may be hit, but for my web-app, we get many more reads than writes and it seems to be performing pretty well. We even wrote a little VB.NET utility to automatically write the triggers based on the table definitions.

Just a thought!

Chris Roberts
No need to store the NewValue, since it's stored in the audited table.
FreshCode
Strictly speaking, that's true. But - when there are a number of changes to the same field over a period of time, storing the new value makes queries such as 'show me all the changes made by Brian' so much easier as all the information about one update is held in one record. Just a thought!
Chris Roberts
A: 

It sounds like you want to track changes to specific entities over time, e.g. ID 3, "bob", "123 main street", then another ID 3, "bob" "234 elm st", and so on, in essence being able to puke out a revision history showing every address "bob" has been at.

The best way to do this is to have an "is current" field on each record, and (probably) a timestamp or FK to a date/time table.

Inserts have to then set the "is current" and also unset the "is current" on the previous "is current" record. Queries have to specify the "is current", unless you want all of the history.
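The insert-and-flip step might look like this (T-SQL sketch; column names are assumed):

```sql
BEGIN TRANSACTION;

-- Unset the previous current row...
UPDATE Employees
SET IsCurrent = 0
WHERE EmployeeId = @EmployeeId AND IsCurrent = 1;

-- ...and insert the new revision as the current one
INSERT INTO Employees (EmployeeId, DateModified, Name, Address, IsCurrent)
VALUES (@EmployeeId, GETDATE(), @Name, @Address, 1);

COMMIT TRANSACTION;

-- Everyday queries then filter on the flag
SELECT * FROM Employees WHERE EmployeeId = @EmployeeId AND IsCurrent = 1;
```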

There are further tweaks to this if it's a very large table, or a large number of revisions are expected, but this is a fairly standard approach.

Steve Moon
A: 

If you want to do the first one you might want to use XML for the Employees table too. Most newer databases allow you to query into XML fields, so this is not always a problem. And it might be simpler to have one way to access employee data regardless of whether it's the latest version or an earlier one.

I would try the second approach, though. You could simplify it by having just one Employees table with a DateModified field. EmployeeId + DateModified would be the primary key, and you can store a new revision by just adding a row. This way, archiving older versions and restoring versions from the archive is easier too.
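A sketch of that simplified single-table layout (field names assumed):

```sql
-- One table holds every revision; the composite key makes
-- storing a new revision a plain INSERT.
CREATE TABLE Employees (
    EmployeeId   INT         NOT NULL,
    DateModified DATETIME    NOT NULL,
    FirstName    VARCHAR(50) NOT NULL,
    -- ...remaining employee fields
    PRIMARY KEY (EmployeeId, DateModified)
);

-- Full revision history of one employee, newest first
SELECT *
FROM Employees
WHERE EmployeeId = @EmployeeId
ORDER BY DateModified DESC;
```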

Another way to do this could be the Data Vault model by Dan Linstedt (http://www.danlinstedt.com/). I did a project for the Dutch statistics bureau that used this model, and it works quite well. But I don't think it's directly useful for day-to-day database use. You might get some ideas from reading his papers, though.

Mendelt
+3  A: 

Ramesh, I was involved in the development of a system based on the first approach.
It turned out that storing revisions as XML leads to huge database growth and significantly slows things down.
My approach would be to have one table per entity:

Employee (Id, Name, ... , IsActive)

where IsActive marks the latest version

If you want to associate some additional info with revisions, you can create a separate table containing that info and link it to the entity tables with a PK/FK relation.

This way you can store all versions of employees in one table. Pros of this approach:

  • Simple database structure
  • No conflicts, since the table becomes append-only
  • You can roll back to a previous version by simply changing the IsActive flag
  • No need for joins to get object history

Note that the employee Id can no longer be unique on its own, since several revisions of the same employee share it.
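A rollback under this scheme is just moving the flag (sketch; RevisionId is an assumed extra column identifying the target version):

```sql
-- Deactivate the current version...
UPDATE Employee SET IsActive = 0
WHERE Id = @EmployeeId AND IsActive = 1;

-- ...and reactivate the revision to roll back to
UPDATE Employee SET IsActive = 1
WHERE Id = @EmployeeId AND RevisionId = @TargetRevisionId;
```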

aku
I would use a "RevisionNumber" or "RevisionDate" column instead of or in addition to IsActive, so you can see all revisions in order.
Sklivvz
A: 

@Matthew Watson This is exactly how I implement this kind of feature. It may start to make largish tables, but at least it's one table, and you don't have to write complicated SELECT INTO statements every time you update the original table.

This method is very much "Keep it Simple Stupid"

GateKiller
+6  A: 

We have implemented a solution very similar to the solution that Balloon suggests, and that works pretty well for us.

The only difference is that we store only the new value; the old value is, after all, stored in the previous history row.

[ID] [int] IDENTITY(1,1) NOT NULL,
[UserID] [int] NULL,
[EventDate] [datetime] NOT NULL,
[TableName] [varchar](50) NOT NULL,
[RecordID] [varchar](20) NOT NULL,
[FieldName] [varchar](50) NULL,
[NewValue] [varchar](5000) NULL

Let's say you have a table with 20 columns. This way you only have to store the exact column that has changed, instead of storing the entire row.
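One consequence worth noting: reconstructing the full state of a record at a point in time means picking, per field, the most recent change at or before that moment. A sketch (the table name AuditTrail is assumed):

```sql
SELECT a.FieldName, a.NewValue
FROM AuditTrail a
WHERE a.TableName = 'Employees'
  AND a.RecordID  = @RecordID
  AND a.EventDate = (SELECT MAX(b.EventDate)
                     FROM AuditTrail b
                     WHERE b.TableName = a.TableName
                       AND b.RecordID  = a.RecordID
                       AND b.FieldName = a.FieldName
                       AND b.EventDate <= @AsOf);
```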

Kjetil Watnedal
+10  A: 
  1. Do not put it all in one table with an IsCurrent discriminator attribute. This just causes problems down the line: it requires surrogate keys and brings all sorts of other problems.
  2. Design 2 does have problems with schema changes. If you change the Employees table, you have to change the EmployeeHistories table and all the related sprocs that go with it. It potentially doubles your schema change effort.
  3. Design 1 works well and, if done properly, does not cost much in terms of performance. You could use an XML schema and even XML indexes to get over possible performance problems. Your comment about parsing the XML is valid, but you could easily create a view using XQuery, which you can include in queries and join to. Something like this...

CREATE VIEW EmployeeHistory AS
SELECT EmployeeId,
       RevisionXML.value('(/employee/FirstName)[1]', 'varchar(50)') AS FirstName,
       RevisionXML.value('(/employee/LastName)[1]', 'varchar(100)') AS LastName,
       RevisionXML.value('(/employee/DepartmentId)[1]', 'integer') AS DepartmentId
FROM EmployeeHistories

Simon Munro
Why do you say not to store it all in one table with an IsCurrent discriminator? Could you point me to some examples of where this becomes problematic?
Nathan W
+1  A: 

@Simon Munro, are you speaking from practice or theory? I worked with a database based exactly on design #1. What I saw was huge performance degradation related to XML parsing (XQuery support in MSSQL 2005). Note that XML indices were in place, which just made the database size grow faster. Using XML for this task makes the database design way too complex. You can use views to hide complex things, but that just adds to the schema's complexity.

aku
A: 

If indeed an audit trail is all you need, I'd lean toward the audit table solution (complete with denormalized copies of the important columns from other tables, e.g., UserName). Keep in mind, though, that bitter experience indicates that a single audit table will be a huge bottleneck down the road; it's probably worth the effort to create individual audit tables for all your audited tables.

If you need to track the actual historical (and/or future) versions, then the standard solution is to track the same entity with multiple rows using some combination of start, end, and duration values. You can use a view to make accessing current values convenient. If this is the approach you take, you can run into problems if your versioned data references mutable but unversioned data.
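A sketch of the start/end variant (illustrative names; a NULL end date marks the current row):

```sql
CREATE TABLE EmployeeVersions (
    EmployeeId INT         NOT NULL,
    StartDate  DATETIME    NOT NULL,
    EndDate    DATETIME    NULL,   -- NULL = still current
    FirstName  VARCHAR(50) NOT NULL,
    -- ...remaining employee fields
    PRIMARY KEY (EmployeeId, StartDate)
);

-- Convenience view for current values
CREATE VIEW CurrentEmployees AS
    SELECT EmployeeId, FirstName
    FROM EmployeeVersions
    WHERE EndDate IS NULL;
```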

Hank Gay
+2  A: 

This article in the Database Programmer blog might be useful - it covers some of the points raised here and discusses the storage of deltas.

Mark Streatfield
A: 

How about:

  • EmployeeID
  • DateModified
    • and/or revision number, depending on how you want to track it
  • ModifiedByUserId
    • plus any other information you want to track
  • Employee fields

You make the primary key (EmployeeId, DateModified), and to get the "current" record(s) you just select MAX(DateModified) for each EmployeeId. Storing an IsCurrent flag is a very bad idea, because, first of all, it can be calculated, and secondly, it is far too easy for the data to get out of sync.

You can also make a view that lists only the latest records and mostly use that while working in your app. The nice thing about this approach is that you don't have duplicates of the data, and you don't have to gather data from two different places (current in Employees, archived in EmployeeHistories) to get all the history, roll back, etc.
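The "latest records" view might be sketched like this (assuming the composite-key schema described above):

```sql
CREATE VIEW CurrentEmployees AS
    SELECT e.*
    FROM Employees e
    WHERE e.DateModified = (SELECT MAX(h.DateModified)
                            FROM Employees h
                            WHERE h.EmployeeId = e.EmployeeId);
```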

gregmac
+1  A: 

Revisions of data is an aspect of the 'valid-time' concept of a Temporal Database. Much research has gone into this, and many patterns and guidelines have emerged. I wrote a lengthy reply with a bunch of references to this question for those interested.

Henrik Gustafsson
A: 

If you want to rely on history data (for reporting reasons), you should use a structure something like this:

// Holds Employee Entity
"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"

// Holds the Employee revisions in rows.
"EmployeeHistories (HistoryId, EmployeeId, DateModified, OldValue, NewValue, FieldName)"

Or global solution for application:

// Holds Employee Entity
"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"

// Holds all entities revisions in rows.
"EntityChanges (EntityName, EntityId, DateModified, OldValue, NewValue, FieldName)"

You can also save your revisions in XML; then you have only one record per revision. It will look like this:

// Holds Employee Entity
"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"

// Holds all entities revisions in rows.
"EntityChanges (EntityName, EntityId, DateModified, XMLChanges)"
dario-g
A: 

If you have to store history, make a shadow table with the same schema as the table you are tracking and a 'Revision Date' and 'Revision Type' column (e.g. 'delete', 'update'). Write (or generate - see below) a set of triggers to populate the audit table.

It's fairly straightforward to make a tool that will read the system data dictionary for a table and generate a script that creates the shadow table and a set of triggers to populate it.
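A sketch of the generated shadow table and trigger (T-SQL; column names illustrative):

```sql
-- Shadow table: same columns as Employees plus revision metadata
CREATE TABLE Employees_History (
    RevisionDate DATETIME    NOT NULL,
    RevisionType VARCHAR(10) NOT NULL,   -- 'update' or 'delete'
    EmployeeId   INT         NOT NULL,
    FirstName    VARCHAR(50) NULL
    -- ...remaining Employees columns
);

-- Record the pre-change row on every update or delete
CREATE TRIGGER trg_Employees_History ON Employees
AFTER UPDATE, DELETE
AS
BEGIN
    INSERT INTO Employees_History (RevisionDate, RevisionType,
                                   EmployeeId, FirstName)
    SELECT GETDATE(),
           CASE WHEN EXISTS (SELECT 1 FROM inserted)
                THEN 'update' ELSE 'delete' END,
           d.EmployeeId, d.FirstName
    FROM deleted d;
END;
```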

Don't try to use XML for this; XML storage is a lot less efficient than the native database table storage that this type of trigger uses.

ConcernedOfTunbridgeWells