views: 1015
answers: 4
I am updating/inserting/deleting values in more than one database and want to wrap the whole thing in a transaction.

Currently it's done like this:

try
{
    db[1].BeginTransaction();
    db[1].ExecuteNonQuery();

    db[2].BeginTransaction();
    db[2].ExecuteNonQuery();

    ...

    db[N].BeginTransaction();
    db[N].ExecuteNonQuery();

    // will execute only if no exception was raised during the process
    for (int a = 0; a < N; a++)
    {
        db[a].Commit(); // what if there is an error/exception here?
    }
}
catch
{
    for (int a = 0; a < N; a++)
    {
        db[a].Rollback();
    }
}

The problem is that this would fail horribly if an exception is raised during one of the commits (see the comment): some databases would already be committed and could no longer be rolled back. Any better ideas?

+3  A: 

Use the TransactionScope class, like this:

using (TransactionScope ts = new TransactionScope())
{
    // all db code here

    // if an error occurs, control jumps out of the using block,
    // and Dispose() without Complete() rolls the transaction back

    ts.Complete();
}

The class will automatically escalate to a distributed transaction if necessary.
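As a sketch of what that looks like with two databases (the connection strings, table, and SQL below are placeholders invented for illustration, not from the original post): opening a second connection to a different server inside the scope is what triggers escalation to a distributed (MSDTC) transaction.

```csharp
using System;
using System.Data.SqlClient;
using System.Transactions;

class MultiDbTransfer
{
    static void Main()
    {
        // Placeholder connection strings -- substitute your own servers/databases.
        const string cs1 = "Server=server1;Database=db1;Integrated Security=true";
        const string cs2 = "Server=server2;Database=db2;Integrated Security=true";

        using (TransactionScope ts = new TransactionScope())
        {
            using (var con1 = new SqlConnection(cs1))
            {
                con1.Open(); // enlists in the ambient transaction
                new SqlCommand("UPDATE t SET x = x - 1", con1).ExecuteNonQuery();
            }

            using (var con2 = new SqlConnection(cs2))
            {
                con2.Open(); // second durable resource -> escalation to MSDTC
                new SqlCommand("UPDATE t SET x = x + 1", con2).ExecuteNonQuery();
            }

            // If any exception was thrown above, this line is never reached and
            // Dispose() rolls the work back on both servers.
            ts.Complete();
        }
    }
}
```

Note that escalation requires the MSDTC service to be running on the machines involved.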

keithwarren7
A: 

As cletus said, you need some kind of two-phase commit. As the article states, this doesn't always work in practice. If you need a robust solution, you must find a way to serialize the transactions so that you can run them one after the other and roll each one back individually.

Since this depends on the specifics of your case, about which you don't provide any details, I can't give you ideas on how to attack it.
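To make the two-phase idea concrete, here is a minimal in-memory sketch. The `IResource` interface, `TwoPhaseCommit` coordinator, and `FakeResource` are all invented for illustration; they are not part of any library, and a real implementation would also need durable logging to survive a crash between the two phases.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical participant interface -- just the shape two-phase commit needs.
public interface IResource
{
    bool Prepare();  // phase 1: do the work, keep locks, promise Commit() will succeed
    void Commit();   // phase 2: make it durable; must not fail after a successful Prepare()
    void Rollback(); // undo the prepared work
}

public static class TwoPhaseCommit
{
    // Returns true if every resource committed, false if everything was rolled back.
    public static bool Run(IEnumerable<IResource> resources)
    {
        var prepared = new List<IResource>();
        try
        {
            // Phase 1: prepare all resources; stop at the first failure.
            foreach (var r in resources)
            {
                if (!r.Prepare())
                    throw new InvalidOperationException("prepare failed");
                prepared.Add(r);
            }

            // Phase 2: commit. By the 2PC contract this cannot fail,
            // which is exactly the guarantee the question's code was missing.
            foreach (var r in prepared)
                r.Commit();
            return true;
        }
        catch
        {
            foreach (var r in prepared)
                r.Rollback();
            return false;
        }
    }
}

// Minimal in-memory participant for demonstration.
public class FakeResource : IResource
{
    private readonly bool _prepareOk;
    public bool Committed, RolledBack;
    public FakeResource(bool prepareOk) { _prepareOk = prepareOk; }
    public bool Prepare() => _prepareOk;
    public void Commit() => Committed = true;
    public void Rollback() => RolledBack = true;
}
```

If any participant fails to prepare, `Run` rolls back every resource that was already prepared and nothing is committed; only when all prepares succeed does the commit phase run.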

Aaron Digulla
A: 

If you wish to execute a transaction across multiple instances of SQL Server, then take a look at the Microsoft Distributed Transaction Coordinator (MSDTC) documentation.

John Sansom
A: 

Using TransactionScope is the answer. It even works across different DBMSs!

Transactions over multiple databases

despart