I'm having a major performance issue with LINQ2SQL and transactions. My code does the following, using IDE-generated LINQ2SQL code:

1. Run a stored proc checking for an existing record
2. Create the record if it doesn't exist
3. Run a stored proc that wraps its own code in a transaction

When I run the code with no transaction scope, I get 20 iterations per second. As soon as I wrap the code in a transaction scope, it drops to 3-4 iterations per second. I don't understand why the addition of a transaction at the top level reduces the performance by so much. Please help?

Pseudo stored proc with transaction:

begin transaction
update some_table_1;
insert into some_table_2;
commit transaction;

select some, return, values

Pseudo LINQ code without transaction:

var db = new SomeDbContext();
var exists = db.RecordExists(/* arguments */);

if (!exists) {
    var record = new SomeRecord 
    {
        // Assign property values
    };

    db.RecordsTable.InsertOnSubmit(record);
    db.SubmitChanges();

    var result = db.SomeStoredProcWithTransactions();
}

Pseudo LINQ code with transaction:

var db = new SomeDbContext();
var exists = db.RecordExists(/* arguments */);

if (!exists) {
    using (var ts = new TransactionScope()) 
    {
        var record = new SomeRecord 
        {
            // Assign property values
        };

        db.RecordsTable.InsertOnSubmit(record);
        db.SubmitChanges();

        var result = db.SomeStoredProcWithTransactions();
        ts.Complete();
    }
}

I know the transaction isn't being escalated to the DTC because I've disabled the DTC. SQL Profiler shows that several of the queries take much longer with the TransactionScope enabled, but I'm not sure why. The queries involved are very short-lived and I've verified that my indexes are being used. I'm unable to determine why the addition of a parent transaction causes so much degradation in performance.

Any ideas?

EDIT:

I've traced the problem to the following query within the final stored procedure:

if exists 
(
    select * from entries where 
        ProfileID = @ProfileID and 
        Created >= @PeriodStart and 
        Created < @PeriodEnd
) set @Exists = 1;

If I add with(nolock) as shown below, the problem disappears.

if exists 
(
    select * from entries with(nolock) where 
        ProfileID = @ProfileID and 
        Created >= @PeriodStart and 
        Created < @PeriodEnd
) set @Exists = 1;

However, I'm concerned that doing so may cause problems down the road. Any advice?

A: 

Does the Stored Procedure you call participate in the ambient (parent) transaction? - that is the question.

It's likely that the Stored Procedure participates in the ambient transaction, which is causing the degradation. There's an MSDN article here discussing how they interrelate.

From the article:

"When a TransactionScope object joins an existing ambient transaction, disposing of the scope object may not end the transaction, unless the scope aborts the transaction. If the ambient transaction was created by a root scope, only when the root scope is disposed of, does Commit get called on the transaction. If the transaction was created manually, the transaction ends when it is either aborted, or committed by its creator."

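To make the scope semantics concrete, here's a minimal sketch (generic variable names, not from the question): the inner scope joins the ambient transaction, and nothing is actually committed until the root scope is disposed.

using (var root = new TransactionScope()) // root scope creates the ambient transaction
{
    using (var inner = new TransactionScope(TransactionScopeOption.Required))
    {
        // Required (the default) joins the ambient transaction rather than starting a new one
        inner.Complete(); // votes to commit; nothing is written yet
    } // disposing the inner scope does not end the transaction

    root.Complete();
} // Commit is only called here, when the root scope is disposed
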
There's also a serious-looking document on nested transactions, located on MSDN here, which looks directly applicable.

Note:

"If TransProc is called when a transaction is active, the nested transaction in TransProc is largely ignored, and its INSERT statements are committed or rolled back based on the final action taken for the outer transaction."

I think that explains the difference in performance - it's essentially the cost of maintaining the parent transaction. Kristofer's suggestion may help to reduce the overhead.

RobS
The stored procedure is executed within the scope of the TransactionScope object, so I assume it does. In fact, I'm almost certain it does - is there a way to tell it not to (not that I want to)?
Chris
You can query @@TRANCOUNT to get the current tx count
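And to the aside - yes: wrapping the call in a scope created with TransactionScopeOption.Suppress keeps it out of the ambient transaction. A rough sketch, reusing the db variable from the question:

using (var suppress = new TransactionScope(TransactionScopeOption.Suppress))
{
    // no ambient transaction in here - the proc relies solely on its own internal transaction
    var result = db.SomeStoredProcWithTransactions();
    suppress.Complete();
}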
RobS
A: 

Although you are using a single DataContext, your code sample is likely to use more than one connection, and that will escalate your transaction to a distributed transaction.

Try initializing your datacontext with an explicit db connection, or call db.Connection.Open() right after creating the datacontext. That removes the overhead of distributed transactions...
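
Something like this (a sketch, using the pseudo names from the question):

using (var db = new SomeDbContext())
{
    db.Connection.Open(); // pin a single connection for the lifetime of the context

    using (var ts = new TransactionScope())
    {
        // ... existence check, insert, and stored proc call as before ...
        ts.Complete();
    }
}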

KristoferA - Huagati.com
+1  A: 

One big thing that changes as soon as you get a transaction - the isolation level. Is your database under heavy contention? If so: by default a TransactionScope uses the highest ("serializable") isolation level, which involves read locks, key-range locks, etc. If it can't acquire those locks immediately, it will slow down while it is blocked. You could investigate by reducing the isolation level of the transaction (via the constructor). For example (but pick your own isolation level):

using (var tran = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot }))
{
    // code
    tran.Complete();
}

However, picking an isolation level is... tricky; serializable is the safest (hence the default). You can also use granular hints (but not via LINQ-to-SQL) such as NOLOCK and UPDLOCK to help control locking of specific tables.


You could also investigate whether the slowdown is due to trying to talk to DTC. Enable DTC and see if it speeds up. The LTM is good, but I've seen composite operations to a single database escalate to DTC before...
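
One way to spot escalation (a sketch; this uses the static DistributedTransactionStarted event on System.Transactions.TransactionManager) is to log promotions as they happen:

// somewhere at startup - fires whenever the LTM promotes a transaction to DTC
TransactionManager.DistributedTransactionStarted += (sender, e) =>
    Console.WriteLine("Escalated to DTC: " +
        e.Transaction.TransactionInformation.DistributedIdentifier);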

Marc Gravell
I wonder what happens if a Stored Procedure's transaction isolation level differs from the parent transaction's?
RobS
@Rob - generally the isolation level is per transaction (be it spid, ltm or dtc based). Do you mean if the SP does a "SET TRANSACTION ISOLATION LEVEL" inside the proc? That would be... fun ;-p
Marc Gravell
Yea, to the last part - if the proc escalated the isolation level... I've never tried it.
RobS