Application 1 -

  • Opens a SqlConnection and a SqlTransaction against a SQL Server 2005 database
  • Inserts a record into Table1
  • Does not commit or roll back the SqlTransaction - intentionally keeping this alive to demonstrate / describe the problem

Application 2 -

  • Opens a SqlConnection and a SqlTransaction against a SQL Server 2005 database
  • Tries to run this query - "SELECT COUNT(Id) FROM Table1"

Table1 - Id is an Identity field. Name is a varchar field. No other fields in the table

Application 2 is unable to run the "SELECT ..." query. It seems that Table1 is locked or blocked by the insert done in Application 1.
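
A minimal sketch of the two applications (connection string, error handling and the actual data are illustrative):

    using System;
    using System.Data.SqlClient;

    // Application 1 - inserts inside a transaction that is never committed.
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        conn.Open();
        SqlTransaction tran = conn.BeginTransaction();   // default READ COMMITTED
        SqlCommand insert = new SqlCommand(
            "INSERT INTO Table1 (Name) VALUES (@name)", conn, tran);
        insert.Parameters.AddWithValue("@name", "Sample");
        insert.ExecuteNonQuery();
        // No Commit() or Rollback() - the locks on the new row stay held.
        Console.ReadLine();
    }

    // Application 2 - blocks as long as Application 1 holds its locks.
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        conn.Open();
        SqlCommand count = new SqlCommand("SELECT COUNT(Id) FROM Table1", conn);
        int rows = (int)count.ExecuteScalar();   // waits (or times out) under READ COMMITTED
    }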

Though the scenario mentioned above is fictional, it adequately demonstrates the problem we are facing. We want to be able to open a long-running SqlTransaction (possibly lasting hours) and do many inserts/updates via that SqlTransaction.

We are developing a data conversion application which has to do a lot of processing on a lot of data before it can be inserted/updated into the database. The data conversion has to happen while our main WebForms-based application is running against the same SQL Server 2005 database in which we want to perform the long-running transaction.

All the tables in our application are segmented by a ClientID field from a ClientMaster table. For example, if we have a CollegeMaster table, it would have a ClientID field as part of the primary key and an ID field for its own identification. The data conversion starts by creating a new ClientID, and that new ClientID is used in all the other tables.

Ideally, queries like the one mentioned in Application 2 should not be affected by the long-running transaction. Those queries should only read/use data that is already committed and continue to work, rather than get blocked by the long-running transaction. What can Application 1 do to ensure that this is achieved?

+1  A: 

I recommend not having long running transactions; however, with that said:

You can lower the transaction isolation level by using hints. I typically do not recommend this practice, but if your selects were written as:

    select count(id) from Table1 (NOLOCK)

you would essentially bypass all locks on the table; however, be warned that you can and will end up with dirty reads and phantom reads (where the data is there one minute but not the next). If your queries are truly segmented then you should be OK. There are also other hints you can look at in Books Online.
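
In the original scenario that would mean changing Application 2's command text; a sketch (conn is assumed to be an already opened SqlConnection):

    using System.Data.SqlClient;

    // WITH (NOLOCK) reads past the locks held by Application 1's open transaction,
    // at the cost of possibly counting rows that are later rolled back (dirty reads).
    SqlCommand count = new SqlCommand(
        "SELECT COUNT(Id) FROM Table1 WITH (NOLOCK)", conn);
    int rows = (int)count.ExecuteScalar();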

Another option is to do all your long-running processing in staging tables, then do one final copy/insert into Table1. This will help keep the length of the transaction down.
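
For example (Table1_Staging is a made-up name; the point is that only the final copy runs inside a transaction):

    using System.Data.SqlClient;

    // The hours of processing write into the staging table with no long-lived
    // transaction open, so Table1 itself is never locked during the conversion.
    // ...many inserts into Table1_Staging...

    // One short transaction at the very end copies the finished data across.
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (SqlTransaction tran = conn.BeginTransaction())
        {
            new SqlCommand(
                "INSERT INTO Table1 (Name) SELECT Name FROM Table1_Staging",
                conn, tran).ExecuteNonQuery();
            tran.Commit();   // Table1 is locked only for this brief copy
        }
    }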

JoshBerke
A: 

The reasoning behind the long-running transaction for data conversion is that SQL Server 2005 already has a rollback facility. If there are problems with the data conversion, we can use that facility to roll back the inserted/updated data.

The reasoning against staging tables is that we have a lot of identity fields which, in a concurrent situation, will be difficult to keep track of. While converting from the actual source tables to the staging tables, we will generate one set of identity values in the "master" tables, which would then be used in the "children" staging tables. Next, while pushing the data from the staging tables to the target tables, we will have to make sure that the new identity values generated for the "master" tables are mapped and propagated properly to the "children" tables.
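
If we did go the staging route, one pattern we could consider on SQL Server 2005 is to capture each new "master" identity with SCOPE_IDENTITY() and build an old-to-new map for the "children"; a rough sketch, with all variable and column names invented for illustration:

    using System.Collections.Generic;
    using System.Data.SqlClient;

    // Maps the identity value used in the staging tables to the identity
    // generated when the row is inserted into the real master table.
    Dictionary<int, int> idMap = new Dictionary<int, int>();

    // For each master row read from the staging table (stagingId, name)...
    SqlCommand insertMaster = new SqlCommand(
        "INSERT INTO CollegeMaster (ClientID, Name) VALUES (@clientId, @name); " +
        "SELECT CAST(SCOPE_IDENTITY() AS int);", conn, tran);
    insertMaster.Parameters.AddWithValue("@clientId", newClientId);
    insertMaster.Parameters.AddWithValue("@name", name);
    idMap[stagingId] = (int)insertMaster.ExecuteScalar();

    // Children rows are then inserted using idMap[stagingParentId]
    // instead of the identity value they carried in the staging tables.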

Dhwanil Shah
The reasoning against long-running transactions is that you can lock your DB and make your data inaccessible. Reading it through the lock risks ACID compliance. The reasoning behind staging tables is that you can use your database like a database.
Greg D
I agree with GregD here.
JoshBerke
Also, you could change your PK and use something more portable like a Guid... You could use a Guid in your staging tables, and then switch to the identity when you copy to your live DB...
JoshBerke
+1  A: 

You may want to look into this SQL Server 2005 feature (snapshot isolation / row versioning). It sounds like it may help you. It's a newer locking mechanism you have to enable in the DB, but it apparently causes much less blocking.

http://msdn.microsoft.com/en-us/library/ms177404(SQL.90).aspx
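
Roughly, once the feature is enabled on the database, readers see the last committed version of the rows instead of blocking; a sketch (database name and connection string are placeholders):

    using System.Data;
    using System.Data.SqlClient;

    // One-time setup, run once against the database (e.g. from Management Studio):
    //   ALTER DATABASE YourDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
    //   ALTER DATABASE YourDatabase SET READ_COMMITTED_SNAPSHOT ON;

    // With READ_COMMITTED_SNAPSHOT on, Application 2's existing query stops blocking.
    // A reader can also opt in to full snapshot isolation explicitly:
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (SqlTransaction tran = conn.BeginTransaction(IsolationLevel.Snapshot))
        {
            SqlCommand count = new SqlCommand(
                "SELECT COUNT(Id) FROM Table1", conn, tran);
            int rows = (int)count.ExecuteScalar();   // sees only committed data, no blocking
            tran.Commit();
        }
    }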

Ben Dempsey
Thanks! Running these two statements does allow me to get the behaviour I need from SQL Server 2005, though I still need to explore their implications: ALTER DATABASE AdventureWorks SET READ_COMMITTED_SNAPSHOT ON; ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON;
Dhwanil Shah
A: 

Why not store all the changes in a DataSet and commit them all at once? Would this not solve the long-running transaction issue?
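
A rough sketch of what I mean, assuming the converted rows are accumulated in a DataTable and pushed in one short transaction at the end (names and column sizes are illustrative):

    using System.Data;
    using System.Data.SqlClient;

    // Build up all converted rows in memory first - no database transaction is open yet.
    DataTable table1 = new DataTable("Table1");
    table1.Columns.Add("Name", typeof(string));
    // ...hours of processing, adding rows with table1.Rows.Add(...)...

    // Push everything in one short transaction at the very end.
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (SqlTransaction tran = conn.BeginTransaction())
        {
            SqlDataAdapter adapter = new SqlDataAdapter();
            adapter.InsertCommand = new SqlCommand(
                "INSERT INTO Table1 (Name) VALUES (@Name)", conn, tran);
            adapter.InsertCommand.Parameters.Add("@Name", SqlDbType.VarChar, 100, "Name");
            adapter.Update(table1);   // Table1 is locked only during this call
            tran.Commit();
        }
    }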

Charles Gardner