views:

1223

answers:

3

I just read an interesting article by Omar on his blog, "Linq to SQL solve Transaction deadlock and Query timeout problem using uncommitted reads", and at the end Javed Hasan started arguing with him about his solution to the NOLOCK situation on a high-volume site.

The problem being solved is this: in SQL terms, we need to issue SELECT statements with NOLOCK or use SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; otherwise, at high volume, rows in the DB get locked and cause errors. The technology Omar used is Linq2Sql, so the question is: how do we achieve this in C# data access code so that the above does not happen?

Basically, in the post Omar arrives at his solution by working on and testing against a real-world site with tools like SqlProfiler, whereas Javed Hasan arrives at his from MSDN documentation, Scott Hanselman's blog post, etc.

Omar suggests using the following:

using (var db = new DropthingsDataContext2())
{
  db.Connection.Open();
  db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;");

  var user = db.aspnet_Users.First();
  var pages = user.Pages.ToList();
}

whereas Javed Hasan suggests:

using (new TransactionScope(TransactionScopeOption.Required, 
  new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
{
  // Your db operation
}

I'm very interested to know what you do about this particular issue on a high-volume site like Stack Overflow, and what Jeff and his team did in this regard.

Edit: After reading the first post, I want to point out a few things from Omar's post:

  1. He did run into a connection problem with his approach, but he solved it; see his post.
  2. More importantly, he mentioned that he tried the ADO.NET transaction approach and even tried what Scott Hanselman wrote on his blog, but it did not work for a high-volume site; it degraded performance considerably. Omar said: "System.Transactions has significant overhead. I have never been able to use it on a high volume website without making CPU go 100% and Req/sec come down to 1/10th. It's made for enterprise applications, not for high volume websites."
+4  A: 

First of all, please avoid uncommitted reads; they can cause lots of issues. A much better approach is simply to set the database to snapshot isolation. This is what Jeff did.
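For context, snapshot isolation is enabled at the database level rather than in application code. A sketch (`MyDb` is a placeholder database name; note that flipping `READ_COMMITTED_SNAPSHOT` requires no other active connections to the database):

```sql
-- Allow explicit SNAPSHOT transactions in this database
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Make ordinary READ COMMITTED reads use row versioning
-- instead of shared locks, so readers don't block on writers
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;
```

With `READ_COMMITTED_SNAPSHOT` on, your existing Linq2Sql code needs no changes at all; plain SELECTs stop taking shared locks.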

Jeff basically said: "bla bla bla, be real, bla bla bla, database theoreticians, bla bla bla, READ UNCOMMITTED can be useful for REAL production apps that don't need data consistency." Jeff is not a DBA; fortunately, there are many DBAs here on SO.

The problem with Omar's approach is that it can leak connections whose isolation level is still READ UNCOMMITTED back into your connection pool, which could wreak havoc on your website: a random later statement may end up executing under read uncommitted.
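To make the leak concrete: a pooled connection keeps whatever isolation level was last SET on it. A defensive variant of Omar's code (a sketch, not from his post) resets the level before the connection returns to the pool:

```csharp
using (var db = new DropthingsDataContext2())
{
    db.Connection.Open();
    db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;");
    try
    {
        var user = db.aspnet_Users.First();
        var pages = user.Pages.ToList();
    }
    finally
    {
        // Reset before this connection goes back to the pool;
        // otherwise the next borrower silently inherits READ UNCOMMITTED.
        db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ COMMITTED;");
    }
}
```

Even this is fragile if an exception kills the connection mid-reset, which is why a scope-based approach is safer.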

Javed's approach is much better, because on dispose the framework has the chance to clean things up on the connection.

EDIT: If you are having performance issues with Javed's approach, you could look at rolling your own lightweight transaction manager.

Some things you probably want to do:

  • Hold a stack of current transactions.
  • Confirm you are on the creating thread when a transaction is committed.
  • Reset the transaction isolation level to its previous state on dispose.
  • Rollback on dispose if the transaction was not committed.
  • Support nested rollbacks.
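The points above can be sketched roughly as follows (illustrative only; the class and its simplifications, such as always restoring READ COMMITTED rather than the true previous level, are mine):

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Threading;

public sealed class IsolationScope : IDisposable
{
    // Per-thread stack of live scopes, so scopes can nest
    [ThreadStatic] static Stack<IsolationScope> _stack;

    readonly IDbConnection _conn;
    readonly int _ownerThreadId;
    bool _committed;

    public IsolationScope(IDbConnection conn, string isolationLevel)
    {
        _conn = conn;
        _ownerThreadId = Thread.CurrentThread.ManagedThreadId;
        if (_stack == null) _stack = new Stack<IsolationScope>();
        _stack.Push(this);
        Exec("SET TRANSACTION ISOLATION LEVEL " + isolationLevel);
        Exec("BEGIN TRANSACTION");
    }

    public void Commit()
    {
        // Committing from another thread is a bug; fail loudly
        if (Thread.CurrentThread.ManagedThreadId != _ownerThreadId)
            throw new InvalidOperationException("Commit must run on the creating thread.");
        Exec("COMMIT TRANSACTION");
        _committed = true;
    }

    public void Dispose()
    {
        _stack.Pop();
        if (!_committed)
            Exec("ROLLBACK TRANSACTION"); // rollback on dispose if never committed
        // Simplification: restore the server default rather than
        // tracking the actual previous isolation level
        Exec("SET TRANSACTION ISOLATION LEVEL READ COMMITTED");
    }

    void Exec(string sql)
    {
        using (var cmd = _conn.CreateCommand())
        {
            cmd.CommandText = sql;
            cmd.ExecuteNonQuery();
        }
    }
}
```

Because it issues plain T-SQL on one connection, it avoids System.Transactions entirely, which is the overhead Omar complained about.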
Sam Saffron
Thanks for the answer; I appended to my post above just to be a bit clearer.
ray247
A: 

{My (poor) reputation prevents me from posting comments, so I'm putting this in as an answer}

If you use IsolationLevel via System.Transactions and create a new Linq context within the transaction block, the transaction ends up being promoted to the DTC (Distributed Transaction Coordinator). That just happened to me and was quite unexpected.
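For example, this ordering is the one that triggered the promotion for me (a sketch reusing the context and table names from the question):

```csharp
// Context created INSIDE the scope: the connection opens while the
// ambient transaction is active, and the transaction can get
// promoted to the DTC.
using (var scope = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
using (var db = new DropthingsDataContext2())
{
    var user = db.aspnet_Users.First();
    scope.Complete();
}
```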

Federico González
Are you sure about that? I think that's only true when you're either using SQL 2000 or you have a linked server, or you have more than one connection in your scope.
Dave Markle
@Dave, I'm sure about that. It just happened to me some time ago, using SQL Server 2005 with no linked servers.
Federico González
I always create the Transaction after creating the Linq context, never the other way around.
RyanHennig
+1  A: 

I'm a developer on a tools team in the SQL Server group at Microsoft. Many applications are not super-sensitive to transaction consistency, especially if you are writing an app that does reporting or something similar, where occasionally inconsistent data is not the end of the world. Of course, if you are writing a financial application or something else with a very low tolerance for data inconsistency, you probably want to explore other solutions.

If you do choose to use uncommitted reads, I have blogged a handy solution using extension methods in C#.
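The blogged code isn't reproduced here, but the general shape of such an extension method is an IQueryable helper that materializes the query inside a ReadUncommitted TransactionScope. A sketch (illustrative only; the method name is mine, not necessarily the one from the blog):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Transactions;

public static class UncommittedReadExtensions
{
    // Runs the query under READ UNCOMMITTED, then lets the scope
    // dispose so the connection's isolation level is cleaned up.
    public static List<T> ToListReadUncommitted<T>(this IQueryable<T> query)
    {
        using (var scope = new TransactionScope(TransactionScopeOption.Required,
            new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
        {
            var result = query.ToList();
            scope.Complete();
            return result;
        }
    }
}
```

Usage then stays one-liner-friendly at each call site, e.g. `var pages = db.Pages.Where(p => p.UserId == id).ToListReadUncommitted();`.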

RyanHennig