views: 644 · answers: 2

I'm part of a team building an ADO.NET-based web site. We sometimes have several developers and an automated testing tool working simultaneously on a development copy of the database.

We use snapshot isolation level, which, to the best of my knowledge, uses optimistic concurrency: rather than locking, it hopes for the best and throws an exception at commit time if the affected rows were altered by another party during the transaction.

To use snapshot isolation level we use:

ALTER DATABASE <database name>
SET ALLOW_SNAPSHOT_ISOLATION ON;

and in C#:

Transaction = SqlConnection.BeginTransaction(IsolationLevel.Snapshot);

Note that IsolationLevel.Snapshot isn't the same as READ_COMMITTED_SNAPSHOT, which we've also tried but are not currently using.
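For context, here is a minimal sketch of how such a transaction can be opened and how a snapshot update conflict surfaces on commit. The connection string, table, and column names are placeholders; the only SQL Server specific I rely on is that a snapshot write conflict is raised as error number 3960:

```csharp
// Sketch: run an update under snapshot isolation and detect a write conflict.
// Connection string and table/column names are illustrative placeholders.
using System.Data;
using System.Data.SqlClient;

using (var connection = new SqlConnection("Server=...;Database=...;Integrated Security=true"))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction(IsolationLevel.Snapshot))
    {
        try
        {
            using (var command = new SqlCommand(
                "UPDATE SomeTable SET SomeColumn = @value WHERE Id = @id",
                connection, transaction))
            {
                command.Parameters.AddWithValue("@value", 42);
                command.Parameters.AddWithValue("@id", 1);
                command.ExecuteNonQuery();
            }
            transaction.Commit();
        }
        catch (SqlException ex) when (ex.Number == 3960)
        {
            // Snapshot isolation update conflict: another transaction modified
            // the same row after this transaction's snapshot was taken.
            transaction.Rollback();
        }
    }
}
```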

When one of the developers enters debug mode and pauses the .NET app, they will hold a connection with an active transaction while debugging. Now, I'd expect this not to be a problem - after all, all transactions are using snapshot isolation level, so while one transaction is paused, other transactions should be able to proceed normally since the paused transaction isn't holding any locks. Of course, when the paused transaction completes, it is likely to detect a conflict; but that's acceptable so long as other developers and the automated tests can proceed unhindered.

However, in practice, when one person halts a transaction while debugging, all other DB users attempting to access the same rows are blocked despite using snapshot isolation level.

Does anybody know why this occurs, and/or how I can achieve true optimistic (non-blocking) concurrency?

The resolution (an unfortunate one for me): Remus Rusanu noted that writers always block other writers; this is backed up by MSDN, which never quite says so outright but only ever mentions avoiding reader-writer blocking. In short, the behavior I want isn't implemented in SQL Server.
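The write-write blocking is easy to reproduce in two query windows (a sketch; it assumes ALLOW_SNAPSHOT_ISOLATION is already ON and a table T with a row Id = 1 exists):

```sql
-- Session 1: update a row under snapshot isolation and leave the
-- transaction open (simulating the paused debugger).
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
UPDATE T SET Value = Value + 1 WHERE Id = 1;
-- (do not commit yet)

-- Session 2: readers still see the pre-update row version...
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT Value FROM T WHERE Id = 1;  -- returns immediately

-- ...but a second writer blocks on session 1's exclusive lock until
-- session 1 commits or rolls back (and then fails with update-conflict
-- error 3960 if session 1 committed).
UPDATE T SET Value = Value + 1 WHERE Id = 1;  -- blocks
```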

+1  A: 

Have you looked at the locks while one developer has the transaction paused? Also, merely requesting the snapshot isolation level does not have much effect on its own. Have you set ALLOW_SNAPSHOT_ISOLATION ON?

Here are the steps:

ALTER DATABASE <database name>
SET READ_COMMITTED_SNAPSHOT ON;
GO

ALTER DATABASE <database name>
SET ALLOW_SNAPSHOT_ISOLATION ON;
GO

After the database has been enabled for snapshot isolation, developers and users must then request that their transactions be run in this snapshot mode. This must be done before starting a transaction, either by a client-side directive on the ADO.NET transaction object or within their Transact-SQL query by using the following statement:

SET TRANSACTION ISOLATION LEVEL SNAPSHOT

Raj

The database has ALLOW_SNAPSHOT_ISOLATION ON. Failure to do so results in an immediate exception; you cannot set the IsolationLevel to Snapshot when ALLOW_SNAPSHOT_ISOLATION is off (i.e. setting the isolation level does not fail silently). I'll update the post to reflect the exact commands needed, thanks!
Eamon Nerbonne
+5  A: 

SNAPSHOT isolation level affects, like all isolation levels, only reads. Writes are still blocking each other. If you believe that what you see are read blocks, then you should investigate further and check out the resource types and resource names on which blocking occurs (wait_type and wait_resource in sys.dm_exec_requests).
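To see what a blocked session is actually waiting on, a query along these lines works (all columns shown exist in sys.dm_exec_requests):

```sql
-- Inspect what currently-blocked sessions are waiting on.
SELECT session_id,
       blocking_session_id,
       wait_type,      -- e.g. LCK_M_X indicates an exclusive-lock wait
       wait_resource,  -- the locked resource (key, page, object, ...)
       wait_time       -- milliseconds spent waiting so far
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;
```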

I wouldn't advise making code changes in order to support a scenario that involves developers staring at the debugger for minutes on end. If you believe that this scenario can repeat in production (i.e. a client hangs) then it is a different story. To achieve what you want you must minimize writes and perform all writes at the end of the transaction, in a single call that commits before returning. This way no client can hold X locks for a long time (it cannot hang while holding X locks). In practice this is pretty hard to pull off and requires a lot of discipline from developers in how they write the data access code.
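One way to sketch that discipline in ADO.NET: buffer the intended writes in memory during the interaction and only open a transaction for the final, short flush, so no exclusive lock exists while a debugger might be paused. The helper structure and names here are hypothetical, not the poster's actual code:

```csharp
// Sketch: record intended writes without touching the database, then
// apply them all in one short transaction that commits before returning.
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

var pendingWrites = new List<Action<SqlConnection, SqlTransaction>>();

// During the user interaction: enqueue work, acquire no locks.
pendingWrites.Add((conn, tx) =>
{
    using (var cmd = new SqlCommand(
        "UPDATE SomeTable SET SomeColumn = @v WHERE Id = @id", conn, tx))
    {
        cmd.Parameters.AddWithValue("@v", 42);
        cmd.Parameters.AddWithValue("@id", 1);
        cmd.ExecuteNonQuery();
    }
});

// At the very end: one short transaction, so exclusive locks are held
// only for the duration of the flush. connectionString is a placeholder.
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction(IsolationLevel.Snapshot))
    {
        foreach (var write in pendingWrites)
            write(connection, transaction);
        transaction.Commit();
    }
}
```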

Remus Rusanu
This behaviour is unlikely in production. On the other hand, the development version is quite important, and I'd be quite willing to make code changes if that smooths out development. Unfortunately, an "svn-esque" style of locking (hope for the best and simply fail on conflict) doesn't seem to be implemented. The ability to have long-running and complex transactions without blocking all small transactions would still be useful - as is, we use fewer transactions than ideal just to avoid blocking.
Eamon Nerbonne
There is no silver bullet. But if you find yourself often blocked in write vs. write conflicts you should consider why this is occurring, why different 'requests' cause updates of the same data. Perhaps you can partition your application better and lower the probability of overlap. Perhaps some updates can be deferred, enqueued in a work table during the user transaction (the enqueue/dequeue can be made block-free, with care) and later processed by dedicated batch processes. Also ensure all transactions lock only the minimum needed (no page/table locks, no escalation, no futile table scans).
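The deferred-work-table idea might look roughly like this (a sketch; the table, column, and batch structure are invented for illustration):

```sql
-- During the user transaction: only enqueue the intended work.
INSERT INTO dbo.PendingUpdates (TargetId, NewValue, EnqueuedAt)
VALUES (@TargetId, @NewValue, SYSUTCDATETIME());

-- Later, a dedicated batch process drains the queue. READPAST lets
-- concurrent dequeuers skip rows locked by each other instead of blocking.
DECLARE @work TABLE (Id INT, TargetId INT, NewValue INT);

DELETE TOP (100) q
OUTPUT deleted.Id, deleted.TargetId, deleted.NewValue INTO @work
FROM dbo.PendingUpdates q WITH (READPAST, ROWLOCK);

UPDATE t
SET t.SomeColumn = w.NewValue
FROM dbo.SomeTable t
JOIN @work w ON w.TargetId = t.Id;
```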
Remus Rusanu