tags:
views: 337
answers: 4

I have an existing application that I am working with, and the customer has defined the table structure they would like for an audit log. It has the following columns:

storeNo 
timeChanged
user 
tableChanged 
fieldChanged 
BeforeValue 
AfterValue

Usually I just have simple audit columns on each table that provide a userChanged and timeChanged value. The application that will be writing to these tables is a Java application, and the calls are made via JDBC against an Oracle database. My question is: what is the best way to get the before/after values? I hate to compare objects to see what changes were made in order to populate this table; that is not going to be efficient. If several columns change in one update, then this new table will have several entries. Or is there a way to do this in Oracle? What have others done in the past to track not only changes but the changed values?

+10  A: 

This is traditionally what Oracle triggers are for. Each insert or update fires the trigger body, which has access to the "before" and "after" data, which you can do with as you please, such as logging the old values to an audit table. It's transparent to the application.

http://asktom.oracle.com/pls/asktom/f?p=100:11%3A0%3A%3A%3A%3AP11%5FQUESTION%5FID:59412348055
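A minimal sketch of the idea against the table structure from the question (the trigger and audited table names are invented; note also that `user` is a reserved word in Oracle, so the customer's column would need quoting or renaming):

```sql
-- Hypothetical row-level trigger: one audit row per monitored column change.
-- Assumes the audited table has a storeNo column, per the question's layout.
create or replace trigger orders_audit_trg
    after update of status on orders
    for each row
begin
    -- Log when the value changed, including NULL-to-value transitions.
    if :old.status != :new.status
       or (:old.status is null) != (:new.status is null)
    then
        insert into audit_log
            (storeNo, timeChanged, "user", tableChanged,
             fieldChanged, BeforeValue, AfterValue)
        values
            (:new.storeNo, systimestamp, user, 'ORDERS',
             'STATUS', :old.status, :new.status);
    end if;
end;
/
```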

skaffman
I'd give you a million up votes if I could. You can't do this from the app; it has to be in a trigger or you will not capture all changes.
HLGEM
If you know that all database updates are going through the application, *and* your application uses a competent ORM layer, then it may be that ORM-layer versioning can do the job also.
skaffman
Damn. Beat me to the answer. +1 for TFGITW :)
DVK
@skaffman: A competent ORM layer should not have to read rows back to the application just to update them. Unless it does that for every update, it can't do the auditing. Also, lots of IFs there that may cease to be true in the future even if they are right now.
WW
Just to add a bit to it, someone on our team wrote an automatic audit generator that would make the CREATE TABLE and CREATE TRIGGER statements so it was easy to keep up with schema changes. It was pretty slick, and saved a ton of work.
aehiilrs
WW: A decent ORM layer should have some form of optimistic locking built in, so that if the record has changed since the ORM picked it up, it doesn't blindly overwrite it with what it thinks are the latest and greatest values. As such, the ORM should only be doing an update if it knows what the before and after values are.
Gary
@Gary, typically optimistic locking is implemented with a concurrency token, not by caching all values and comparing them.
Yishai
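The concurrency-token style mentioned here is usually just a version column checked in the UPDATE's WHERE clause; a sketch (table and column names are made up):

```sql
-- Optimistic locking via a version (concurrency token) column:
-- no old values are cached or compared, only the token.
update orders
   set status  = :newStatus,
       version = version + 1
 where id      = :id
   and version = :expectedVersion;
-- If zero rows were updated, another session changed the row first;
-- the application must re-read and retry (or report a conflict).
```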
A: 

I'll ditto on triggers.

If you have to do it at the application level, I don't see how it would be possible without going through these steps:

  1. start a transaction
  2. SELECT FOR UPDATE of the record to be changed
  3. for each field to be changed, pick up the old value from the record and the new value from the program logic
  4. for each field to be changed, write an audit record
  5. update the record
  6. end the transaction

If there's a lot of this, I think I would be creating an update-record function to do the compares, either at a generic level or a separate function for each table.
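The steps above could be sketched as a per-table PL/SQL procedure (table, column, and procedure names here are illustrative only, and the audit columns are a subset of the customer's table):

```sql
-- Illustrative select-for-update / compare / audit / update sequence.
create or replace procedure update_order_status (
    p_id         in orders.id%type,
    p_new_status in orders.status%type
) as
    v_old_status orders.status%type;
begin
    -- 2. lock the row and pick up the old value
    select status into v_old_status
      from orders
     where id = p_id
       for update;

    -- 3./4. write an audit record for each changed field
    if v_old_status != p_new_status
       or (v_old_status is null) != (p_new_status is null)
    then
        insert into audit_log
            (tableChanged, fieldChanged, BeforeValue, AfterValue, timeChanged)
        values
            ('ORDERS', 'STATUS', v_old_status, p_new_status, systimestamp);
    end if;

    -- 5. apply the update
    update orders set status = p_new_status where id = p_id;

    -- 6. end the transaction
    commit;
end;
/
```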

Jay
+1  A: 

"the customer has defined the table structure they would like for an audit log"

Dread words.

Here is how you would implement such a thing:

create or replace trigger emp_bur before update on emp for each row
begin
    -- Note: these comparisons treat NULL-to-value transitions as "no change";
    -- production code would need additional IS NULL checks.
    if :new.ename != :old.ename then
        insert_audit_record('EMP', 'ENAME', :old.ename, :new.ename);
    end if;
    if :new.sal != :old.sal then
        insert_audit_record('EMP', 'SAL', :old.sal, :new.sal);
    end if;
    if :new.deptno != :old.deptno then
        insert_audit_record('EMP', 'DEPTNO', :old.deptno, :new.deptno);
    end if;
end;
/

As you can see, it involves a lot of repetition, but that is easy enough to handle, with a code generator built over the data dictionary. But there are more serious problems with this approach.

  1. It has a sizeable overhead: a single update which touches ten fields will generate ten insert statements.
  2. The BeforeValue and AfterValue columns become problematic when we have to handle different datatypes - even dates and timestamps become interesting, let alone CLOBs.
  3. It is hard to reconstruct the state of a record at a point in time. We need to start with the earliest version of the record and apply the subsequent changes incrementally.
  4. It is not immediately obvious how this approach would handle INSERT and DELETE statements.

Now, none of those objections are a problem if the customer's underlying requirement is to monitor changes to a handful of sensitive columns: EMPLOYEES.SALARY, CREDIT_CARDS.LIMIT, etc. But if the requirement is to monitor changes to every table, a "whole record" approach is better: just insert a single audit record for each row affected by the DML.
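A sketch of that "whole record" variant, using a shadow table (names are invented, and EMP is assumed to have the columns from the trigger example above):

```sql
-- Shadow table: same columns as EMP plus audit metadata.
create table emp_audit as
select e.*,
       sysdate                     audit_ts,
       user                        audit_user,
       cast('U' as varchar2(1))   audit_op
  from emp e
 where 1 = 0;

-- One audit row per affected row, whatever the number of changed columns.
create or replace trigger emp_whole_rec_aud
    after insert or update or delete on emp
    for each row
begin
    if deleting then
        insert into emp_audit (empno, ename, sal, deptno, audit_ts, audit_user, audit_op)
        values (:old.empno, :old.ename, :old.sal, :old.deptno, sysdate, user, 'D');
    else
        insert into emp_audit (empno, ename, sal, deptno, audit_ts, audit_user, audit_op)
        values (:new.empno, :new.ename, :new.sal, :new.deptno, sysdate, user,
                case when inserting then 'I' else 'U' end);
    end if;
end;
/
```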

APC
+2  A: 

If you use Oracle 10g or later, you can use the built-in auditing functions. You paid good money for the license; you might as well use it.

Read more at http://www.oracle.com/technology/pub/articles/10gdba/week10%5F10gdba.html
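For example, standard auditing can be enabled with a plain AUDIT statement, and Fine-Grained Auditing (covered in the linked article) with DBMS_FGA; the policy parameters below are illustrative:

```sql
-- Standard auditing of DML on one table (requires the AUDIT_TRAIL
-- initialization parameter to be set, e.g. audit_trail = db).
audit insert, update, delete on scott.emp by access;

-- Fine-Grained Auditing: record statements that touch the SAL column.
begin
    dbms_fga.add_policy(
        object_schema   => 'SCOTT',
        object_name     => 'EMP',
        policy_name     => 'emp_sal_changes',
        audit_column    => 'SAL',
        statement_types => 'INSERT,UPDATE,DELETE');
end;
/
```

Note that standard auditing records who did what and when, but not the before/after values themselves; capturing those still takes FGA or a trigger-based approach.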

jva
+1 for not reinventing the wheel.
Vincent Malgrat
Fine-Grained Auditing requires an Enterprise Edition license, which not everybody springs for. But I agree that if we have paid for an EE license then we should use the built-ins rather than rolling our own.
APC