A DBA that my company hired to troubleshoot deadlock issues just told me that our OLTP databases' locking problems will improve if we change the transaction isolation level from READ UNCOMMITTED to READ COMMITTED.

Isn't that just 100% false? READ COMMITTED will cause more locks, correct?


More Details:

Our data is very "siloed" and user-specific. 99.9999999% of all user interactions work with that user's own data, and our dirty-read scenarios, if they happen at all, barely affect what the user is trying to do.


Thanks for all the answers. The DBA in question ended up being useless, and we fixed the locking issues by adding a single index.


I regret that I didn't specify that the locking problems were occurring for UPDATE statements and not regular SELECTs. From my googling, the two query types have distinct solutions when dealing with locking issues.
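For illustration only (the table and column names below are invented, not our actual schema): an UPDATE whose WHERE clause has no supporting index has to scan the table, taking locks on every row it examines along the way, while a matching index lets it seek straight to the target rows. This is the kind of single-index fix that resolved our deadlocks.

```sql
-- Hypothetical example: support UPDATE ... WHERE UserId = @UserId
-- with an index so the statement seeks instead of scanning,
-- touching (and locking) far fewer rows.
CREATE NONCLUSTERED INDEX IX_UserData_UserId
    ON dbo.UserData (UserId);
```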

+1  A: 

Doesn't it depend on what your problem is: for example if your problem is a deadlock, mightn't an increase in the locking level cause an earlier acquisition of locks and therefore a decreased possibility of deadly embrace?

ChrisW
+4  A: 

Hi,

That does sound like a rash decision; however, without all the details of your environment it is difficult to say.

You should advise your DBA to consider the use of SQL Server's advanced isolation features, i.e. the row-versioning techniques. These were introduced in SQL Server 2005 specifically to address OLTP databases that experience heavy locking.

The following white paper covers quite complicated subject matter, but it is a must-read for any exceptional DBA. It includes examples of how to use each of the additional isolation levels in different types of environments, e.g. OLTP, offloaded reporting environments, etc.

http://msdn.microsoft.com/en-us/library/ms345124.aspx

In summary, it would be both foolish and rash to modify the transaction isolation level for all of your T-SQL queries without first developing a solid understanding of how the excessive locking is occurring within your environment.

I hope this helps but please let me know if you require further clarification.

Cheers!

John Sansom
"foolish and rash to modify the transaction isolation ..." You said it better than I did. (+1)
RussellH
A: 

This sounds scary. Do you really want to just change these parameters to avoid deadlocks? Maybe the data needs to be locked?

That said, it could be that the DBA is referring to the new (as of SQL Server 2005) READ COMMITTED SNAPSHOT that uses row versioning and can eliminate some kinds of deadlocks.

http://www.databasejournal.com/features/mssql/article.php/3566746/Controlling-Transactions-and-Locks-Part-5-SQL-2005-Snapshots.htm
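Enabling it, for example, is a single database-level option (the database name here is a placeholder); note that the ALTER needs exclusive access to the database, so it is usually run when no other connections are active:

```sql
-- Under READ_COMMITTED_SNAPSHOT, readers at the default READ COMMITTED
-- level see the last committed row version (kept in tempdb's version
-- store) instead of blocking on writers' locks.
ALTER DATABASE YourDatabase
    SET READ_COMMITTED_SNAPSHOT ON;
```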

RussellH
+1  A: 

If the data is siloed and you are still getting deadlocks, then you may simply need to add ROWLOCK hints to the queries that are causing the problem, so that the locks are taken at the row level rather than at the coarser page level the engine may otherwise choose.
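For example, a hint like this (table and column names are made up for illustration) keeps the statement's locks at row granularity:

```sql
-- Request row-level locks for this UPDATE rather than letting
-- SQL Server take coarser page locks.
UPDATE dbo.UserData WITH (ROWLOCK)
SET Status = 'Processed'
WHERE UserDataId = @UserDataId;
```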

READ UNCOMMITTED will reduce the number of locks if the blocking is caused by SELECT statements. If the blocking is caused by INSERT, UPDATE, and DELETE statements, then changing the isolation level to READ UNCOMMITTED won't do anything for you, since writers take exclusive locks regardless of the isolation level. READ UNCOMMITTED has the same effect as adding WITH (NOLOCK) to your queries.
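To make the equivalence concrete (the table name is invented for the example), these two reads behave the same way: neither takes shared locks, and both can return uncommitted, dirty data:

```sql
-- Session-wide setting: every subsequent SELECT reads dirty.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM dbo.UserData;

-- Per-table hint with the same effect on just this one query.
SELECT * FROM dbo.UserData WITH (NOLOCK);
```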

mrdenny
+1 for distinguishing the resolution options by the type of activity causing the locking.
John Sansom