The current implementation of TransactionScope lacks the ability to change isolation levels in nested scopes.

MSDN states: When using nested TransactionScope objects, all nested scopes must be configured to use exactly the same isolation level if they want to join the ambient transaction. If a nested TransactionScope object tries to join the ambient transaction yet it specifies a different isolation level, an ArgumentException is thrown.
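The MSDN behavior quoted above can be demonstrated with a small sketch (the specific isolation levels chosen here are just for illustration):

```csharp
using System;
using System.Transactions;

class NestedScopeDemo
{
    static void Main()
    {
        var outerOptions = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.Serializable
        };

        using (var outer = new TransactionScope(
            TransactionScopeOption.Required, outerOptions))
        {
            var innerOptions = new TransactionOptions
            {
                IsolationLevel = IsolationLevel.ReadCommitted
            };

            try
            {
                // TransactionScopeOption.Required means "join the ambient
                // transaction if one exists". The isolation levels differ,
                // so the constructor throws ArgumentException.
                using (var inner = new TransactionScope(
                    TransactionScopeOption.Required, innerOptions))
                {
                    inner.Complete();
                }
            }
            catch (ArgumentException ex)
            {
                Console.WriteLine("Nested scope rejected: " + ex.Message);
            }

            outer.Complete();
        }
    }
}
```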

However, SQL Server allows us to change isolation levels at any time we please, so why doesn't TransactionScope allow it? I don't get it.

Are there any standards in the BCL about nested SQL transactions and their isolation levels that prohibit this behavior? What are my options? I certainly cannot design class libraries and publish the isolation levels they require just so that they can be used.

Suppose method A() wants Snapshot level and calls method B(), which wants Read Committed level. Method A() is in LibraryA, which is developed by me; method B() is in LibraryB, which is developed by a fictitious company. How can A() call B() without getting an ArgumentException?
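One escape hatch worth noting (my own suggestion, not part of the question): B() can decline to join A()'s ambient transaction by requesting its own transaction with TransactionScopeOption.RequiresNew. The two methods then run in separate transactions, so the isolation levels no longer have to match, though you also give up the atomicity of a single shared transaction:

```csharp
using System.Transactions;

static class LibraryB
{
    public static void B()
    {
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted
        };

        // RequiresNew: always start a fresh transaction instead of joining
        // the ambient one, so no ArgumentException regardless of what
        // isolation level the caller's ambient transaction uses.
        using (var scope = new TransactionScope(
            TransactionScopeOption.RequiresNew, options))
        {
            // ... do B's work here ...
            scope.Complete();
        }
    }
}
```

TransactionScopeOption.Suppress is a similar option when B() needs no transaction at all; in both cases B()'s work commits or rolls back independently of A()'s transaction.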

A: 

This is more a question of theory than of any particular product. Basically, the original notion of a transaction is something that has all the ACID properties: Atomic, Consistent, Isolated, and Durable.

http://databases.about.com/od/specificproducts/a/acid.htm

Now, consider what is meant by "isolation level": essentially, for performance or other reasons, we may choose to forego some or all of the guarantees that the database makes concerning ACIDity. Isolation and atomicity are intimately related, like a dual of each other. Break one, and the other suffers.

It's common to break a transaction into the parts that are individually expressible in particular SQL statements, but for your guarantees to help at all, you need to wrap them up in a transaction with at least enough isolation for the whole shmear to work properly.

Now, if some fraction of your transaction requires a greater degree of isolation, then the whole transaction does also, or else it could break during the sensitive part. Conversely, if some part is given a lower level of isolation, it could well be that the proper function of that part actually depends on that lower level. (Don't ask me to come up with a good example of when this could be true.)

Anyway, the database server has no way of telling if the actual requirements are compatible, so it gives up on the problem altogether.

What's really supposed to happen is that people develop the transactions they need. I realize this isn't always easy, but in light of the fact that your business data schema is yours alone, it's not supposed to be easy to plug random SQL components together.

In the case of a centralized database (like SQL Server), the best practice is to design entire transactions, and then, as you notice commonality among the designs, factor out the common parts, ideally before you write any hairy code.

If you really need to coordinate among different data repositories (or similar) then a distributed transaction manager is called for. It's a separate product, and even more difficult to get right than a database server. But that's necessary for things like ATMs which either give you money or don't, and which either hit your bank account or don't, and those two have to match.

Good luck!

Ian
Thanks for your reply. But what I'm still missing is: if SQL Server allows changing the isolation level of a transaction, why does the Framework prohibit it? Sometimes you might really need this for some sub-statement for optimization purposes, and I would really rather not change the level with a SQL statement myself.
Ivan Zlatanov