I need to execute an UPDATE statement over a SQL Server table that is used by another process at the same time. Because of that, deadlocks sometimes occur. Which isolation level do you recommend to avoid or minimize these deadlocks?

+1  A: 

Look into snapshot isolation - this isolation level is a good compromise between consistency and speed. I might be shot down in flames for saying this, but I believe deadlocks are much harder to encounter at this isolation level.

Whether this is the right thing to do to get around your deadlock situation is another matter entirely.
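For reference, here is a minimal sketch of what enabling and using snapshot isolation looks like; the database, table, and column names are placeholders:

```sql
-- Enable snapshot isolation once per database:
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Then, in the updating session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    UPDATE dbo.MyTable
    SET SomeColumn = 'new value'
    WHERE Id = 42;
COMMIT TRANSACTION;
-- If another writer modified the same row concurrently, the UPDATE fails
-- with error 3960 (snapshot update conflict) and the transaction must be
-- retried by the application.
```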

Will A
Deadlocks are more difficult to encounter, but you'll hit a tonne of UPDATE rollbacks if you get conflicts!
Jeremy Smyth
+3  A: 
READ UNCOMMITTED

But that allows other processes to read the data before the transaction has committed, which is known as a dirty read.

You may prefer to turn on row versioning: the update creates a new version of the row, and any other SELECT statements use the old version until this one has committed. To do this, turn on READ_COMMITTED_SNAPSHOT mode. There is more info here. There is an overhead involved in maintaining the row versions, but it removes UPDATE/SELECT deadlocks.
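A sketch of turning that mode on, assuming a database named MyDatabase:

```sql
-- Switching this option needs the database to be free of other active
-- connections; WITH ROLLBACK IMMEDIATE kicks them off rather than waiting.
ALTER DATABASE MyDatabase
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;
```

After this, sessions running at the default READ COMMITTED level read row versions automatically; no per-session SET statement is needed.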

Chris Diver
A: 

Are you using WITH (NOLOCK)?
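For context, NOLOCK is a table hint on the reading side; a sketch with placeholder names:

```sql
-- NOLOCK tells the SELECT to ignore locks held by the concurrent UPDATE,
-- so it will not block or deadlock against it - but it may return dirty,
-- uncommitted rows.
SELECT SomeColumn
FROM dbo.MyTable WITH (NOLOCK)
WHERE Id = 42;
```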

Carnotaurus
A: 

Use a cursor or a loop to update small batches of rows; this avoids SQL Server escalating to a table lock.
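A minimal sketch of that batching pattern, assuming a table dbo.MyTable with a column SomeColumn; the batch size of 1000 keeps the lock count well below the threshold at which SQL Server considers escalating to a table lock:

```sql
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    -- Update at most 1000 rows per iteration; the WHERE clause must
    -- exclude already-updated rows so the loop makes progress.
    UPDATE TOP (1000) dbo.MyTable
    SET SomeColumn = 'new value'
    WHERE SomeColumn <> 'new value';

    SET @rows = @@ROWCOUNT;  -- 0 when no rows remain, ending the loop
END;
```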

SPE109
A: 

The suggestions to use READ UNCOMMITTED here are OK, but they really side-step the issue of why you're getting a deadlock in the first place. If you don't care about dirty reads then that's fine, but if you need the benefits of isolation (consistency, etc.) then I recommend figuring out a proper locking strategy in your application.

I don't have the answer for you on that one - I've been working out some strategies on that myself. See the comments of this question for some discussion.

uosɐſ