views: 642 | answers: 7

+3  Q: 

Database deadlocks

One of the classic reasons for a database deadlock is two transactions inserting/updating tables in a different order.

e.g. Transaction A inserts in Table A then Table B

and Transaction B inserts in Table B followed by Table A.

Such a scenario is always at risk of a database deadlock (assuming you are not using the serializable isolation level).
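
For concreteness, a minimal sketch of that interleaving in SQL (table and column names are made up):

    -- Run each session in a separate connection.

    -- Session 1 (Transaction A)
    BEGIN TRAN;
    UPDATE TableA SET Val = 1 WHERE Id = 1;  -- locks the row in Table A

    -- Session 2 (Transaction B)
    BEGIN TRAN;
    UPDATE TableB SET Val = 1 WHERE Id = 1;  -- locks the row in Table B

    -- Session 1
    UPDATE TableB SET Val = 2 WHERE Id = 1;  -- blocks, waiting on session 2

    -- Session 2
    UPDATE TableA SET Val = 2 WHERE Id = 1;  -- blocks, waiting on session 1:
                                             -- deadlock; the engine kills one
                                             -- session (error 1205 in SQL Server)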

My question is

A) What kind of patterns do you follow in your design to make sure that all transactions insert/update in the same order? A book I was reading suggested that you can sort the statements by the name of the table. Have you done something like this, or something different, that would enforce that all inserts/updates are in the same order?

B) What about deleting records? Deletes need to start from child tables, while updates/inserts need to start from parent tables. How do you ensure that this does not run into a deadlock? (A rough sketch of the ordering I mean is below.)
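
Roughly what I mean: inserts/updates go parent-first (e.g. sorted by table name), but deletes have to go child-first, so the two kinds of transactions end up taking locks in opposite orders. Invoice and InvoiceLine here are made-up parent/child tables, with a foreign key from InvoiceLine to Invoice:

    -- Inserts/updates: parent first, then child
    BEGIN TRAN;
        UPDATE Invoice     SET Total  = 100.00 WHERE InvoiceId = 42;
        UPDATE InvoiceLine SET Amount = 100.00 WHERE InvoiceId = 42 AND LineNo = 1;
    COMMIT;

    -- Deletes: child first, then parent (the reverse order)
    BEGIN TRAN;
        DELETE FROM InvoiceLine WHERE InvoiceId = 42;
        DELETE FROM Invoice     WHERE InvoiceId = 42;
    COMMIT;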

A: 

ha! it is a lot more complicated than that!

KM
Could you possibly elaborate on your answer?
John Saunders
+1  A: 

I analyze all database actions to determine, for each one, whether it needs to be in a multiple-statement transaction, and then, for each such case, what minimum isolation level is required to prevent deadlocks... As you said, Serializable will certainly do so...

Generally, only a very few database actions require a multiple-statement transaction in the first place, and of those, only a few require serializable isolation to eliminate deadlocks.

For those that do, set the isolation level for that transaction before you begin it, and reset it to whatever your default is after it commits.
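
A minimal sketch of that pattern in T-SQL, assuming SQL Server and a READ COMMITTED default:

    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;    -- only for this transaction

    BEGIN TRAN;
        -- ... the multi-statement work that actually needs serializable ...
    COMMIT TRAN;

    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- back to the default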

Charles Bretana
Preventing deadlocks is not the prime reason for isolation. It's preventing phantom updates and race conditions.
Walter Mitty
"Phantom reads", and "Missing/double Reads", are prevented by Isolation level Serializable.
Charles Bretana
Each isolation level is there to prevent its own characteristic set of data inconsistency issues. "Dirty reads" are prevented by Read Committed, "non-repeatable reads" are prevented by Repeatable Read, and "phantom reads" and "missing/double reads" are prevented by Serializable.
Charles Bretana
A: 

Your example would only be a problem if the database locked the ENTIRE table. If your database is doing that...run :)

aquinas
I don't think so. This will happen even in the case of a row-level lock, as long as both transactions are competing for the lock on the same row.
RN
Well, right. I should have said the chance of it *likely* happening... It's going to be awfully rare that two different transactions are going to want to modify the SAME rows...and in a different order.
aquinas
A: 

I have found that one of the best investments I ever made in avoiding deadlocks was to use an Object Relational Mapper that could order database updates. The exact order is not important, as long as every transaction writes in the same order (and deletes in exactly the reverse order).

The reason that this avoids most deadlocks out of the box is that your operations are always table A first, then table B, then table C (which perhaps depends on table B).

You can achieve a similar result as long as you exercise care in your stored procedures or data layer's access code. The only problem is that it requires great care to do it by hand, whereas an ORM with a Unit of Work concept can automate most cases.

UPDATE: A delete should run forward to verify that everything is the version you expect (you still need record version numbers or timestamps) and then delete backwards once everything verifies. As this should all happen in one transaction, the possibility of something changing out from under you shouldn't exist. The only reason for the ORM doing it backwards is to obey the key requirements, but if you do your check forward, you will have all the locks you need already in hand.
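
Hand-rolled, that check-forward-then-delete-backward idea might look roughly like the T-SQL below. This is only a sketch: Invoice/InvoiceLine are made-up parent/child tables with rowversion columns, the @Expected... values are assumed to be parameters of a stored procedure, and an ORM would generate the equivalent for you.

    SET XACT_ABORT ON;  -- any error rolls the whole transaction back
    BEGIN TRAN;

        -- Verify "forward" (parent, then child) that every row is still the
        -- version we expect, taking update locks as we go.
        IF NOT EXISTS (SELECT 1 FROM Invoice WITH (UPDLOCK)
                       WHERE InvoiceId = 42 AND RowVersion = @ExpectedInvoiceVersion)
           OR NOT EXISTS (SELECT 1 FROM InvoiceLine WITH (UPDLOCK)
                          WHERE InvoiceId = 42 AND RowVersion = @ExpectedLineVersion)
        BEGIN
            ROLLBACK TRAN;
            RETURN;  -- something changed underneath us; let the caller retry
        END;

        -- Delete "backward" (child first) to satisfy the foreign key.
        DELETE FROM InvoiceLine WHERE InvoiceId = 42;
        DELETE FROM Invoice     WHERE InvoiceId = 42;

    COMMIT TRAN;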

Godeke
I totally agree! But how does an ORM solution handle deletes? If I am updating a row in the parent table in Transaction A, and deleting a row in the child table in Transaction B, how will the ORM solution avoid a deadlock? An ORM solution can help in ordering, but deletes in the database have an inherent problem.
RN
Updated my answer to include some more about this case.
Godeke
+3  A: 

Deadlocks are no biggie. Just be prepared to retry your transactions on failure.

And keep them short. Short transactions consisting of queries that touch very few records (via the magic of indexing) are ideal to minimize deadlocks - fewer rows are locked, and for a shorter period of time.

You need to know that modern database engines don't lock tables; they lock rows, so deadlocks are a bit less likely.

You can also avoid locking by using MVCC and the CONSISTENT READ transaction isolation level: instead of locking, some threads will just see stale data.
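
A rough retry skeleton in T-SQL (SQL Server reports the deadlock victim as error 1205; THROW needs SQL Server 2012+, on older versions you would re-raise with RAISERROR; the transaction body is a placeholder):

    DECLARE @retries INT = 3;
    DECLARE @done    BIT = 0;

    WHILE @done = 0
    BEGIN
        BEGIN TRY
            BEGIN TRAN;
                -- ... short statements touching few, well-indexed rows ...
            COMMIT TRAN;
            SET @done = 1;                       -- success: leave the loop
        END TRY
        BEGIN CATCH
            IF XACT_STATE() <> 0 ROLLBACK TRAN;

            SET @retries = @retries - 1;
            IF ERROR_NUMBER() <> 1205 OR @retries = 0
                THROW;                           -- not a deadlock, or out of retries
        END CATCH;
    END;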

Seun Osewa
I agree to an extent. You should program to recover from deadlocks, and all your suggestions are good. But at the same time, I recognize that it is good to be consistent in the manner you acquire and release locks (and this is for all locks), and I wonder what patterns/strategies help you do that.
RN
If you're using SQL Server 2005, you can use "READ COMMITTED SNAPSHOT." This should give you the same semantics as Oracle: writers never block readers.
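For reference, enabling it looks roughly like this (the database name is a placeholder, and the ALTER needs the database to be free of other connections):

    ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;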
aquinas
+1  A: 
Karl
A: 
  1. Carefully design your database processes to eliminate, as much as possible, transactions that involve multiple tables. When I've had database design control there has never been a case of deadlock for which I could not design out the condition that caused it. That's not to say they don't exist and perhaps abound in situations outside my limited experience; but I've had no shortage of opportunities to improve designs causing these kinds of problems. One obvious strategy is to start with a chronological write-only table for insertion of new complete atomic transactions with no interdependencies, and apply their effects in an orderly asynchronous process (a rough sketch follows this list).

  2. Always use the database default isolation levels and locking settings unless you are absolutely sure what risks they incur, and have proven it by testing. Redesign your process if at all possible first. Then, impose the least increase in protection required to eliminate the risk (and test to prove it.) Don't increase restrictiveness "just in case" - this often leads to unintended consequences, sometimes of the type you intended to avoid.

  3. To repeat the point from another direction, most of what you will read on this and other sites advocating the alteration of database settings to deal with transaction risks and locking problems is misleading and/or false, as demonstrated by how they conflict with each other so regularly. Sadly, especially for SQL Server, I have found no source of documentation that isn't hopelessly confusing and inadequate.
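
A loose illustration of the write-only table from point 1 (the schema is entirely made up, and @AccountId/@Amount are placeholders): writers do nothing but single-row inserts into a journal, and one background process applies the entries to the real tables in order.

    CREATE TABLE PaymentJournal
    (
        JournalId  BIGINT IDENTITY(1,1) PRIMARY KEY,
        ReceivedAt DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
        AccountId  INT           NOT NULL,
        Amount     DECIMAL(18,2) NOT NULL,
        Applied    BIT           NOT NULL DEFAULT 0
    );

    -- Writers only ever do this, so they never contend with each other:
    INSERT INTO PaymentJournal (AccountId, Amount) VALUES (@AccountId, @Amount);

    -- A single asynchronous job reads unapplied rows in JournalId order and
    -- performs the multi-table updates in one place, in one fixed sequence.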

le dorfier
Avoid transactions that involve multiple tables? That's defeating the point of transactions, which is to group related updates together so tables don't go out of sync. -1.
Seun Osewa
Perhaps I wasn't clear. Of course that's the reason for transactions; but if you can modify your dataflow to require fewer dependent table changes, you reduce the chances of locks, yet without tables going out of sync.
le dorfier