Hi,

Here's my situation (SQL Server):

I have a web application that uses NHibernate for data access, plus three other desktop applications. All of them access the same database and are likely to be working with the same tables at any given time.

Now, with NHibernate's help I'm batching selects in order to load an aggregate with its whole hierarchy - so I see 4 to maybe 7 selects being issued at once (not sure if that matters).

Every few days one of the applications gets a "Transaction has been chosen as the deadlock victim." error (it usually appears on a select).

I tried switching the database to snapshot isolation, but that didn't help - I ended up with:

Snapshot isolation transaction aborted due to update conflict. You cannot use snapshot isolation to access table '...' directly or indirectly in database '...' to update, delete, or insert the row that has been modified or deleted by another transaction. Retry the transaction or change the isolation level for the update/delete statement.
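
For reference, this is roughly what I mean by switching to snapshot isolation - a minimal sketch, where MyAppDb and dbo.Orders are placeholder names:

    -- Enable snapshot isolation at the database level (placeholder name).
    ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- Each transaction then opts in explicitly:
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    BEGIN TRANSACTION;
        SELECT * FROM dbo.Orders WHERE OrderId = 42;
        -- An UPDATE here fails with the error above (3960) if another
        -- transaction modified the same row after this snapshot started.
    COMMIT;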

What suggestions do you have for this situation? What should I try, or what should I read, in order to find a solution?

EDIT:

Actually there's no RAID in there :). The number of users per day is small (I'd say about 100 per day, with hundreds of small orders on a busy day), and the database is a bit bigger, at about 2 GB, and growing faster every day.

It's a business app, that handles orders, emails, reports, invoices and stuff like that.

Lazy loading would not be an option in this case.

I guess taking a very close look at those indexes is my best bet.

+1  A: 

Is your hardware properly configured (specifically the RAID configuration)? Is it capable of handling your workload?

If hardware is all good and humming, you should ensure you have the 'right' indexes to match your query workload.

Many locking/deadlock problems can be eliminated with the correct indexes (covering indexes can take pressure off the clustered index during inserts).
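
For example, a covering index for a frequent order lookup might look something like this (a sketch only - the table, columns, and the right key/INCLUDE list depend on your actual queries):

    -- Hypothetical covering index: the key columns match the WHERE/ORDER BY,
    -- and INCLUDE carries the selected columns so the read never has to
    -- visit the clustered index.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
        ON dbo.Orders (CustomerId, OrderDate)
        INCLUDE (Status, Total);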

BTW: turning on snapshot isolation will put increased pressure on your tempDB. How is tempDB configured? RAID 0 is preferred (and even better use an SSD if tempDB is a bottleneck).
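
To see how tempDB is currently laid out (how many data files, their sizes, and where they live), something along these lines should do:

    -- List tempdb's files, their sizes (in MB) and physical locations.
    SELECT name, physical_name, size * 8 / 1024 AS size_mb
    FROM sys.master_files
    WHERE database_id = DB_ID('tempdb');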

Mitch Wheat
A: 

While it's not uncommon to find this error in NHibernate sessions with large numbers of users, it seems to be happening too often in your case.

Perhaps your objects are very large, resulting in long-running selects? If your selects are taking too long, that might indicate problems with your indexes (as Mitch Wheat explains).

If everything is in order, you could also try lazy loading to postpone your selects until you really need the data. This might not be appropriate for your exact situation, though, so you'll have to see whether it works.

Jon Limjap
+1  A: 

Deadlocks are complicated. A deadlock means that at least two sessions are each holding a lock and waiting for the other to release a different lock; since both are waiting, neither lock is ever released, neither session can continue, and a deadlock occurs.

In other words, A has lock X, B has lock Y, and now A wants Y and B wants X. Neither will give up the lock it already holds until its transaction finishes, so both wait indefinitely for the other's lock. SQL Server sees that this is happening and kills one of the transactions to resolve the deadlock. Snapshot isolation won't help you - the DB still needs to preserve the atomicity of transactions.
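
As a toy illustration (table and column names are made up), two sessions that update the same two rows in opposite order will deadlock:

    -- Session A:
    BEGIN TRAN;
    UPDATE dbo.Orders   SET Status = 'X' WHERE OrderId   = 1;  -- A locks the Orders row

    -- Session B:
    BEGIN TRAN;
    UPDATE dbo.Invoices SET Status = 'X' WHERE InvoiceId = 1;  -- B locks the Invoices row

    -- Session A:
    UPDATE dbo.Invoices SET Status = 'X' WHERE InvoiceId = 1;  -- A now waits on B

    -- Session B:
    UPDATE dbo.Orders   SET Status = 'X' WHERE OrderId   = 1;  -- B now waits on A -> deadlock;
                                                                -- one session is chosen as the victim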

There is no simple answer anyone can give as to why a deadlock would be occurring. You'll need to profile your application to find out.

Start here: How to debug SQL deadlocks. That's a good intro.

Next, look at Detecting and Ending Deadlocks on MSDN. That will give you a lot of good background information on why deadlocks occur, and help you understand what you're looking at/for.
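
While you're investigating, you can have SQL Server write the deadlock graph to the error log with the standard trace flags (double-check they suit your version before enabling them server-wide):

    -- Write deadlock details (the deadlock graph) to the SQL Server error log.
    DBCC TRACEON (1222, -1);    -- detailed output, SQL Server 2005 and later
    -- DBCC TRACEON (1204, -1); -- older-style output, also available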

There are also some previous SO questions that you might want to look at:

Or, if the deadlocks are very infrequent, just write some exception-handling code into your application to retry the transaction if a deadlock occurs. Sometimes it can be extremely hard (if not nearly impossible) to prevent certain deadlocks. As long as you write transactionally-safe code, it's not the end of the world; it's completely safe to just try the transaction again.
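
SQL Server reports the deadlock victim as error 1205, so the retry can live on either side; a T-SQL sketch of the idea (your own statements go where the comment is) looks something like this:

    -- Sketch of a deadlock-retry wrapper (error 1205 = chosen as deadlock victim).
    DECLARE @retries int, @done bit;
    SET @retries = 3;
    SET @done = 0;

    WHILE @done = 0 AND @retries > 0
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;
            -- ... the actual work goes here ...
            COMMIT TRANSACTION;
            SET @done = 1;                     -- success, stop retrying
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;

            IF ERROR_NUMBER() = 1205
                SET @retries = @retries - 1;   -- deadlock victim: loop and try again
            ELSE
            BEGIN
                SET @retries = 0;              -- any other error: give up
                RAISERROR('Transaction failed with a non-deadlock error.', 16, 1);
            END
        END CATCH;
    END;
    -- If the retries run out, the loop just exits; real code should report that too.

In the NHibernate application the equivalent is catching the deadlock error around the transaction and simply re-running the unit of work.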

Aaronaught
All true, but I doubt it's a 'real' deadlock...
Mitch Wheat
What exactly does a REAL deadlock mean?
sirrocco
@sirrocco: Probably what Mitch is referring to is a deadlock caused by lock contention in complex transactions, whereas it's possible that what's happening here is simply that the I/O system can't keep up. Still, for a database that's only 2 GB, cheap hardware should be OK. Either way, profiling will help you track it down.
Aaronaught