views:

65

answers:

4

Let's say UserA and UserB both have an application open and are working with the same type of data. UserA inserts a record into the table with value 10 (PrimaryKey='A'); UserB does not yet see the value UserA entered and attempts to insert a new value of 20 (PrimaryKey='A'). What I wanted in this situation was a DBConcurrencyException, but what I get instead is a primary key violation. I understand why, but I have no idea how to resolve it. What is a good practice for dealing with such a circumstance? I do not want to merge before updating the database, because I want an error to inform the user that multiple users updated this data.

A: 

One solution might involve an INSTEAD OF INSERT Trigger on the table.

Here you'll be overriding the INSERT statement in your trigger, which gives you the chance to RAISERROR when you detect that a row already exists for primary key 'A'.
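A minimal T-SQL sketch of the idea (the table and column names here are invented for illustration, not taken from the question):

```sql
-- Hypothetical table: Items(ItemKey PRIMARY KEY, ItemValue)
CREATE TRIGGER trg_Items_Insert ON Items
INSTEAD OF INSERT
AS
BEGIN
    IF EXISTS (SELECT 1
               FROM Items t
               JOIN inserted i ON t.ItemKey = i.ItemKey)
    BEGIN
        -- Surface a custom error instead of the raw PK violation
        RAISERROR ('A record with this key already exists.', 16, 1);
        RETURN;
    END;

    INSERT INTO Items (ItemKey, ItemValue)
    SELECT ItemKey, ItemValue
    FROM inserted;
END;
```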

p.campbell
Why add complexity and make things slower? As the next answer correctly suggests, the exception is guaranteed to be raised and is the correct one - at worst he can have the application code catch it and rethrow whichever kind he prefers (though I don't really see why he would want to).
p.marino
@p.marino: indeed - this is only one approach. YMMV.
p.campbell
+2  A: 

It's a design decision you have to make - do you want to use Pessimistic or Optimistic Locking?

I'm too lazy - quoted from this thread:

These are methodologies used to handle multi-user issues. How does one handle the fact that 2 people want to update the same record at the same time?

  1. Do Nothing

    • User 1 reads a record
    • User 2 reads the same record
    • User 1 updates that record
    • User 2 updates the same record

    User 2 has now over-written the changes that User 1 made. They are completely gone, as if they never happened. This is called a 'lost update'.

  2. Lock the record when it is read. Pessimistic locking

    • User 1 reads a record and locks it by putting an exclusive lock on the record (FOR UPDATE clause)
    • User 2 attempts to read and lock the same record, but must now wait behind User 1
    • User 1 updates the record (and, of course, commits)
    • User 2 can now read the record with the changes that User 1 made
    • User 2 updates the record complete with the changes from User 1

    The lost update problem is solved. The problem with this approach is concurrency. User 1 is locking a record that they might never update. User 2 cannot even read the record, because an exclusive lock is required for reading as well. This approach requires far too much exclusive locking, and the locks live far too long (often across user interaction - an absolute no-no). This approach is almost never implemented.
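    A sketch of the pessimistic flow in SQL (Oracle-style FOR UPDATE syntax, as mentioned above; the table and column names are invented):

```sql
-- Session 1: reading with FOR UPDATE takes an exclusive row lock
BEGIN;
SELECT Balance FROM Accounts WHERE AccountId = 42 FOR UPDATE;
-- Session 2 issuing the same SELECT ... FOR UPDATE now blocks here
UPDATE Accounts SET Balance = Balance + 10 WHERE AccountId = 42;
COMMIT;  -- releases the lock; session 2's read proceeds and sees the change
```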

  3. Use Optimistic Locking. Optimistic locking does not use exclusive locks when reading. Instead, a check is made during the update to make sure that the record has not been changed since it was read. This can be done by checking every field in the table, i.e. UPDATE Table1 SET Col2 = x WHERE Col1 = :OldCol1 AND Col2 = :OldCol2 AND Col3 = :OldCol3 AND... There are, of course, several disadvantages to this. First, you must have already SELECTed every single column from the table. Second, you must build and execute this massive statement. Most people implement this instead through a single column, usually called timestamp. This column serves no other purpose than implementing optimistic concurrency. It can be a number or a date. The idea is that it is given a value when the row is inserted. Whenever the record is read, the timestamp column is read as well. When an update is performed, the timestamp column is checked. If it has the same value at UPDATE time as it did when it was read, then all is well: the UPDATE is performed and the timestamp is changed. If the timestamp value is different at UPDATE time, then an error is returned to the user - they must re-read the record, re-make their changes, and try the update again.

    • User 1 reads the record, including the timestamp of 21
    • User 2 reads the record, including the timestamp of 21
    • User 1 attempts to update the record. The timestamp in hand (21) matches the timestamp in the database (21), so the update is performed and the timestamp is updated (22).
    • User 2 attempts to update the record. The timestamp in hand (21) does not match the timestamp in the database (22), so an error is returned. User 2 must now re-read the record, including the new timestamp (22) and User 1's changes, re-apply their changes, and re-attempt the update.
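The timestamp scheme described above can be sketched with a version counter. This is a minimal illustration using Python's sqlite3 rather than the asker's ADO.NET stack; the table and column names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (pk TEXT PRIMARY KEY, val INTEGER, ts INTEGER)")
conn.execute("INSERT INTO t VALUES ('A', 10, 21)")  # row starts at timestamp 21

def optimistic_update(conn, pk, new_val, read_ts):
    # The WHERE clause only matches if the timestamp is unchanged since we read it
    cur = conn.execute(
        "UPDATE t SET val = ?, ts = ts + 1 WHERE pk = ? AND ts = ?",
        (new_val, pk, read_ts))
    return cur.rowcount == 1  # False means another user updated the row first

# Both users read the row at timestamp 21
print(optimistic_update(conn, 'A', 11, 21))  # True: user 1 wins, ts becomes 22
print(optimistic_update(conn, 'A', 12, 21))  # False: user 2 must re-read
```

A rowcount of 0 on the UPDATE is the signal to tell the user their copy is stale.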
OMG Ponies
Except that the two methods apply to updates of existing rows. The question is about inserts with the same primary key; in this case it's the DB that raises the exception, and optimistic/pessimistic locking does not really apply.
p.marino
@p.marino: Pessimistic/Optimistic locking is for dealing with concurrency, which is more than updates.
OMG Ponies
As you may have noted, the example you (lazily) copied goes into much detail about *updates*. And in fact optimistic locking is often implemented by adding an extra field with a numeric value that is incremented by whoever updates the record first. It's a common strategy in web applications with DB-based storage, for example. And it has little to do with the stated problem.
p.marino
+5  A: 

What I wanted in this situation was a DBConcurrencyException, but instead what I have is a primary key violation. I understand why

This is the correct exception for this situation. You say you want to inform the user that this value has already been inserted, so just catch the primary key violation exception and return a user-friendly message.
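In ADO.NET this means catching the provider's exception for the key violation; the shape of the pattern, sketched here with Python's sqlite3 (schema mirroring the question, otherwise invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (pk TEXT PRIMARY KEY, val INTEGER)")

conn.execute("INSERT INTO records VALUES ('A', 10)")  # UserA's insert succeeds

try:
    conn.execute("INSERT INTO records VALUES ('A', 20)")  # UserB collides
    message = "Saved."
except sqlite3.IntegrityError:
    # Translate the raw constraint violation into a friendly message
    message = "Another user has already saved this record. Please reload and try again."

print(message)  # the friendly message, not a stack trace
```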

ajdams
+1 this is the easiest way to deal with the problem.
p.campbell
+2  A: 

If you get PK violations when concurrent users insert NEW records, then one of two things is happening:

  • The violation occurs on a natural key, a key that has business value, like a user name or similar. The PK violation occurs due to a business process flaw, i.e. two different operators try to insert the same business item. How to react is driven entirely by business-domain-specific rules, and we can't possibly give any advice.

  • The violation occurs on a surrogate key, i.e. an identifier like CustomerID or similar. In this case the flaw lies entirely in the application code, as it means a flawed algorithm is used to generate new IDs. Again, no valid advice can be given without understanding how the new IDs are generated.
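As an example of the second case, the classic flawed algorithm is SELECT MAX(id)+1, which races under concurrency. A sketch with Python's sqlite3, simulating two sessions that both read the max before either inserts (all names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'existing')")

def next_id(conn):
    # Flawed: read-then-write with no serialization between the two steps
    return conn.execute(
        "SELECT COALESCE(MAX(customer_id), 0) + 1 FROM customers").fetchone()[0]

# Two concurrent sessions each compute the "next" id before either inserts
id_a = next_id(conn)
id_b = next_id(conn)  # same value as id_a: that's the race

conn.execute("INSERT INTO customers VALUES (?, 'user A')", (id_a,))
try:
    conn.execute("INSERT INTO customers VALUES (?, 'user B')", (id_b,))
except sqlite3.IntegrityError:
    print("PK violation: both sessions generated id", id_b)
```

The fix is to let the database hand out the IDs (IDENTITY / AUTOINCREMENT / a sequence) rather than computing them in application code.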

Remus Rusanu