I have a problem understanding read consistency in databases (Oracle).

Suppose I am the manager of a bank. A customer has acquired a lock (which I don't know about) and is doing some updates. Now, after he has acquired the lock, I am viewing their account information and trying to do something with it. But because of read consistency I will see the data as it existed before the customer got the lock. So won't that affect the inputs I am getting and the decisions I am going to make during that period?

+3  A: 

The point about read consistency is this: suppose the customer rolls back their changes? Or suppose those changes fail because of a constraint violation or some system failure?

Until the customer has successfully committed their changes, those changes do not exist. Any decision you might make on the basis of a phantom read or a dirty read would have no more validity than a decision made in the scenario you describe. Indeed it would have less validity, because the changes are incomplete and hence inconsistent. Concrete example: if the customer's changes include making a deposit and making a withdrawal, how valid would your decision be if you had looked at the account after they had made the deposit but before they had made the withdrawal?
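A minimal two-session sketch of how Oracle hides the half-finished transaction (the ACCOUNTS table and its columns are hypothetical):

    -- Session 1 (the customer): deposit then withdrawal, not yet committed
    UPDATE accounts SET balance = balance + 500 WHERE account_id = 42;
    UPDATE accounts SET balance = balance - 200 WHERE account_id = 42;
    -- (no COMMIT yet)

    -- Session 2 (the manager): sees the balance as it was before the
    -- customer's transaction began, reconstructed from undo
    SELECT balance FROM accounts WHERE account_id = 42;

Only after Session 1 commits will a fresh query in Session 2 see both changes together, never just one of them.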

Another example: a long-running batch process updates the salary of every employee in the organisation. If you run a query against employees' salaries, do you really want a report which shows you half the employees with updated salaries and half with their old salaries?

edit

Read consistency is achieved by using the information in the UNDO tablespace (rollback segments in the older implementation). When a session reads data from a table which is being changed by another session, Oracle retrieves the UNDO information which has been generated by that second session and substitutes it for the changed data in the result set presented to the first session.

If the reading session is a long-running query it might fail with the notorious ORA-01555: snapshot too old. This means the UNDO extent which contained the information necessary to assemble a read-consistent view has been overwritten.
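If that error starts appearing, one common mitigation (sketched here; it needs DBA privileges and enough space in the undo tablespace to honour the setting) is to retain undo for longer. UNDO_RETENTION is expressed in seconds:

    ALTER SYSTEM SET UNDO_RETENTION = 3600;

    -- inspect the current undo-related settings
    SELECT name, value FROM v$parameter WHERE name LIKE 'undo%';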

Locks have nothing to do with read consistency. In Oracle writes don't block reads. The purpose of locks is to prevent other processes from attempting to change rows we are interested in.
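If you do want to stop another session from changing rows while you work on them, you take the lock explicitly. A sketch against the same hypothetical table:

    -- locks the row until COMMIT or ROLLBACK; other writers will wait,
    -- but readers still proceed and simply see the last committed version
    SELECT balance
    FROM   accounts
    WHERE  account_id = 42
    FOR UPDATE;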

APC
Okay, I got your point. It is about what happens if the transaction that holds the lock fails, or what if the manager reads live data before the transaction is complete? By the way, do you know the internal mechanism of read consistency? Do locks come into play for ensuring read consistency? (I guess locks come into play just to prevent concurrent transactions, and that locks are not a must for read consistency.)
+1  A: 

For systems that have a large number of users, where users may "hold" the lock for a long time, the Optimistic Offline Lock pattern is usually used, i.e. include the row version in the UPDATE ... WHERE statement.
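A sketch of the pattern with an explicit version column (the ACCOUNTS table and its VERSION column are hypothetical):

    -- 1. Read the row, remembering the version you saw:
    SELECT balance, version FROM accounts WHERE account_id = 42;
    -- say it returns balance = 1000, version = 7

    -- 2. Write back only if the row is still at that version:
    UPDATE accounts
    SET    balance = 1300,
           version = version + 1
    WHERE  account_id = 42
    AND    version    = 7;

If the UPDATE reports "0 rows updated", another session changed the row in the meantime: re-read it and retry, or report a conflict to the user.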

You can use a date, a version id or something else as the row version. The pseudocolumn ORA_ROWSCN may also be used, but you need to read up on it first.
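A hedged sketch of reading ORA_ROWSCN as a row version; note the caveat in the comment below, since by default the SCN is tracked per block rather than per row unless the table was created with ROWDEPENDENCIES:

    SELECT ora_rowscn, balance
    FROM   accounts
    WHERE  account_id = 42;
    -- compare ORA_ROWSCN again just before writing; a mismatch means
    -- another session has touched the row (or, without ROWDEPENDENCIES,
    -- merely the same block)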

oluies
ORA_ROWSCN is not dependable for optimistic locking, see: http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:2680538100346782134
Shannon Severance
A: 

The application needs to protect against lost updates, which occur when two sessions try to update the same row after each has queried an earlier version of it. This can occur even if no locks are in conflict. There are a number of methods usable in Oracle, which I've blogged about here and in subsequent posts.
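A sketch of the anomaly itself, again with a hypothetical ACCOUNTS table:

    -- Both sessions read the same starting balance of 1000:
    -- Session A:
    SELECT balance FROM accounts WHERE account_id = 42;   -- sees 1000
    -- Session B:
    SELECT balance FROM accounts WHERE account_id = 42;   -- also sees 1000

    -- Session A deposits 100 by writing the value it computed:
    UPDATE accounts SET balance = 1100 WHERE account_id = 42;
    COMMIT;

    -- Session B withdraws 50, also from the stale value it read:
    UPDATE accounts SET balance = 950 WHERE account_id = 42;
    COMMIT;

Session A's deposit is silently lost, even though no lock conflict was ever reported: Session B's UPDATE simply waited for A's commit and then overwrote it.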

Jeffrey Kemp
A: 

When a record is locked, whether by changes or by an explicit lock statement, an entry is made in the header of that block: the ITL (interested transaction list). When you come along to read that block, your session sees this entry and knows where in the rollback segment to find the read-consistent copy.
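The same undo-based versioning can be exercised directly with a flashback query, which explicitly asks for a past version of the data (hypothetical table again):

    SELECT balance
    FROM   accounts AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE)
    WHERE  account_id = 42;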

Joe Shawfield