We have an application that was originally written as a desktop app, lo these many years ago. It starts a transaction whenever you open an edit screen, and commits if you click OK, or rolls back if you click Cancel. This worked okay for a desktop app, but now we're trying to move to ADO.NET and SQL Server, and the long-running transactions are problematic.

I found that we'll have a problem when multiple users are all trying to edit (different subsets of) the same table at the same time. In our old database, each user's transaction would acquire record-level locks on every record they modified during their transaction; since different users were editing different records, everyone got their own locks and everything worked. But in SQL Server, as soon as one user edits a record inside a transaction, SQL Server appears to take a lock on the entire table. When a second user tries to edit a different record in the same table, the second user's app simply locks up, because the SqlConnection blocks until the first user either commits or rolls back.

I'm aware that long-running transactions are bad, and I know that the best solution would be to change these screens so that they no longer keep transactions open for a long time. But since that would mean some invasive and risky changes, I also want to research whether there's a way to get this code up and running as-is, just so I know what my options are.

How can I get two different users' transactions in SQL Server to lock individual records instead of the entire table?

Here's a quick-and-dirty console app that illustrates the issue. I've created a database called "test1", with one table called "Values" that just has ID (int) and Value (nvarchar) columns. If you run the app, it asks for an ID to modify, starts a transaction, modifies that record, and then leaves the transaction open until you press ENTER. I want to be able to

  1. start the program and tell it to update ID 1;
  2. let it get its transaction and modify the record;
  3. start a second copy of the program and tell it to update ID 2;
  4. have it able to update (and commit) while the first app's transaction is still open.

Currently it freezes at step 4, until I go back to the first copy of the app and close it or press ENTER so it commits. The call to command.ExecuteNonQuery blocks until the first connection is closed.

public static void Main()
{
    Console.Write("ID to update: ");
    var id = int.Parse(Console.ReadLine());
    Console.WriteLine("Starting transaction");
    using (var scope = new TransactionScope())
    using (var connection = new SqlConnection(@"Data Source=localhost\sqlexpress;Initial Catalog=test1;Integrated Security=True"))
    {
        connection.Open();
        var command = connection.CreateCommand();
        // Parameterized to avoid SQL injection; [Values] is bracketed because it's a reserved word.
        command.CommandText = "UPDATE [Values] SET Value = 'Value' WHERE ID = @id";
        command.Parameters.AddWithValue("@id", id);
        Console.WriteLine("Updating record");
        command.ExecuteNonQuery();
        Console.Write("Press ENTER to end transaction: ");
        Console.ReadLine();
        scope.Complete();
    }
}

Here are some things I've already tried, with no change in behavior:

  • Changing the transaction isolation level to "read uncommitted"
  • Specifying a "WITH (ROWLOCK)" on the UPDATE statement
+1  A: 

Look into optimistic versus pessimistic locking.

Edit: Previous article linked to classic ADO... sorry.

http://msdn.microsoft.com/en-us/library/cs6hb8k4(VS.71).aspx

Jeremy
That page says "If your underlying data source supports transactions, you can simulate pessimistic concurrency by updating your data within a transaction." So it looks like its discussion of optimistic concurrency only applies when you're *not* using transactions.
Joe White
Does this one help? http://articles.techrepublic.com.com/5100-10878_11-1049842.html It talks about creating the transaction itself with a "Repeatable Read" option set.
Jeremy
Also check out http://www.sql-server-performance.com/articles/per/new_isolation_levels_p1.aspx for the different isolation levels available in SQL 2005 and above. I believe the default isolation level changed between SQL 2000 and 2005, so applications upgraded from one to the other had some nasty surprises when the server got busy.
Jeremy
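For reference, the isolation level that article mentions can also be set per session in T-SQL. A minimal sketch against the question's test table (note that isolation levels mainly govern *read* behavior, so they may not change which write locks an open transaction holds):

```sql
-- Applies to the current session until changed again.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;
UPDATE [Values] SET Value = 'Value' WHERE ID = 1;
COMMIT;
```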
+3  A: 

Just checking, but do you have a primary key or unique index on the ID column?

Neil Moss
Turns out I didn't; I had made ID an autonumber field, but I hadn't made it a primary key. When I added the primary key, it got the record-level locks. I have no idea why a primary key should make any difference, but apparently it does. Thanks!
Joe White
You're welcome. I'm sure there's a more DBA way of saying this, but AIUI, it means you avoid a table scan when looking for the target record, which is what locked the table.
Neil Moss
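Assuming the table matches the question's description (ID int, Value nvarchar), the fix described above could look like the following; the constraint name PK_Values is an assumption:

```sql
-- Give [Values] a primary key so the UPDATE's WHERE ID = @id
-- can seek on an index instead of scanning (and locking) the whole table.
ALTER TABLE [Values]
    ADD CONSTRAINT PK_Values PRIMARY KEY (ID);
```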
+1  A: 

Probably the index was created with row locks set to "off".
"WITH (ROWLOCK)" in a query would have no effect in that case.

You can turn them back on with ALTER INDEX, e.g.:

ALTER INDEX [PK_Values] ON [Values] SET (ALLOW_ROW_LOCKS = ON)
andras
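Before running the ALTER INDEX, you can check the current setting; sys.indexes exposes allow_row_locks and allow_page_locks columns. This query is a sketch, assuming the table lives in the dbo schema:

```sql
-- Check whether row and page locks are allowed on the table's indexes.
SELECT name, allow_row_locks, allow_page_locks
FROM sys.indexes
WHERE object_id = OBJECT_ID(N'[dbo].[Values]');
```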