views: 74

answers: 4

If multiple queries against the same large table reach SQL Server at the same time, how does SQL Server resolve conflicts in this case?

When using LINQ to SQL queries in a WPF application that will be installed on many PCs, all working over the network against one SQL Server database, should I implement some mechanism for resolving such issues myself, or do SQL Server and LINQ to SQL protect me from issues of this kind?

(I don't mean concurrent operations on the same record, which are resolved by using a TIMESTAMP field. I mean queuing issues, when the same large table is queried from multiple network stations.)

+3  A: 

This is behavior specified by the implementation of SQL Server and not something you would want to mess with even if you could.

Edit: I was not aware of SET DEADLOCK_PRIORITY when I wrote this, but I think I can stand by my "leave it alone" recommendation.

Daniel Coffman
+3  A: 

If your queries cause a deadlock, SQL Server has an internal mechanism to deal with it.

If it detects a deadlock situation, it chooses a "victim" transaction to be rolled back. The victim is chosen based first on the deadlock priority of each transaction, and then on the estimated cost of rolling each one back. The one that costs less to roll back is usually the victim.

You can actually use SET DEADLOCK_PRIORITY to control the priority (LOW, NORMAL or HIGH, or a number between -10 and 10). If you find yourself in this situation, though, you should really be putting your effort into reducing deadlocks in the first place.
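As a sketch (the table name here is hypothetical), a low-priority background batch can volunteer to be the deadlock victim so that interactive transactions survive:

```sql
-- Hypothetical reporting batch: if it deadlocks with another
-- transaction, prefer to roll this one back.
SET DEADLOCK_PRIORITY LOW;

BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.Orders;  -- dbo.Orders is a hypothetical large table
COMMIT TRANSACTION;

-- Restore the default for the rest of the session.
SET DEADLOCK_PRIORITY NORMAL;
```

The setting applies per session, so it only needs to be issued once per connection before the work begins.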

womp
+3  A: 

SQL Server manages the connections and the query execution for each of your connections; you have no control over how, when, or who gets to query at any given moment.

You can control how long a query takes by making sure you have proper indexes, but that is as far as your concern needs to go.

Mitchel Sellers
+2  A: 

Databases use isolation levels to resolve the issues you're asking about.

Isolation levels come into play when you need to isolate a resource for a transaction and protect it from other transactions. The protection is done by obtaining locks. Which locks need to be taken, and how long they must be held for the transaction, is determined by SQL Server according to the isolation level that has been set. Lower isolation levels allow multiple users to access the resource simultaneously (concurrency), but they may introduce concurrency-related problems such as dirty reads and data inaccuracy. Higher isolation levels eliminate concurrency-related problems and increase data accuracy, but they may introduce blocking.
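For example (a sketch; the table name is hypothetical), you can lower the isolation level on a connection to trade accuracy for concurrency:

```sql
-- Allow dirty reads on this connection: readers take no shared
-- locks and are not blocked by writers, but may see uncommitted data.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.Orders;  -- dbo.Orders is a hypothetical large table
COMMIT TRANSACTION;

-- READ COMMITTED is SQL Server's default level.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
```

The setting lasts for the rest of the session (or until changed), not just for the next transaction.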

For more info on SQL Server 2005 isolation levels, read this article.

You can tweak isolation levels, even on a per-query basis, but this is an advanced technique that can really cause problems.
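A per-query tweak can be sketched with a table hint (the table name is hypothetical); NOLOCK is equivalent to the READUNCOMMITTED hint for that one table reference only:

```sql
-- Hypothetical: read a large table without taking shared locks.
-- The count may include uncommitted rows, or miss/double-count
-- rows that move during the scan.
SELECT COUNT(*)
FROM dbo.Orders WITH (NOLOCK);
```

Because the hint is scoped to a single table reference in a single statement, it doesn't change the isolation level of the rest of the transaction.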

OMG Ponies
Thanks, it's useful info. +1
rem