Dear Gurus

I have seen many articles and questions/answers regarding lock escalation, but the following things are still vague to me. Please answer these specific points.

  1. If lock escalation is disabled, will there still be intent locks on higher-level resources such as the page or table?

  2. If an IX lock is placed on a page, can another connection acquire an S or X lock on a row within that page?

  3. Does an intent lock behave like a shared lock, i.e. if an IX or IS lock is placed on a table, can we no longer insert a new row into that table?

Please help me understand the above scenarios. I would also like to mention that in my application there will be no more than 2 or 3 locks within one table from one connection, but my application is multi-threaded and may open more than 20 connections concurrently. So should I disable lock escalation? Especially if the answer to question 1 is "no".

Thanks and Regards

A: 

To start from the end - and sorry, 20 or more concurrent connections are nothing in terms of load - you should ONLY disable lock escalation if you need it, that is, after determining that a problem you have is fixed by it. That simple. Anything else is over-optimization - and fighting the intelligence in SQL Server, which does a pretty good job.
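For reference, should you ever determine that escalation really is the problem, SQL Server 2008 lets you control it per table rather than server-wide. A minimal sketch, with a placeholder table name:

```sql
-- dbo.Orders is a hypothetical table name. DISABLE stops escalation
-- to a table lock for this table; AUTO and TABLE are the other
-- options (TABLE is the default behavior).
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = DISABLE);
```

The older server-wide trace flags (1211, 1224) still exist, but the per-table option is the more targeted tool.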

1: Not that I know of, with some possible rare exceptions.

2: To my knowledge this is not documented. Any answer could change with a hotfix.

3: Intent locks are basically a preparation. They block the granting of locks that would not allow their intent to materialize. That said - unless your table is minimal, you will rarely see a table-level lock at all.
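You can watch the intent locks being taken yourself; a quick sketch using the locks DMV, filtered to the current connection:

```sql
-- Inspect the locks held by the current session. With the default
-- isolation level, an UPDATE of a single row typically shows IX on
-- the object (table) and page, plus X on the key/row itself.
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;
```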

Do you have a specific scenario behind your questions? Basically - and sorry if this comes across wrong - this seems like a beginner afraid of what all the locking does to his application, without ever having experienced how SQL Server really works. In general, locks happen as you need them. Sometimes no-lock hints make sense, normally using the right transaction isolation level makes sense, and normally you can totally ignore the locking details because they do not matter. The ONLY situation where you will get into problems is deadlocks, and those are fixed by changing access order.

Besides that, locks just work, and you trust SQL Server to do its work properly.

TomTom
I would like to share that I am currently running a test application on an 8-processor machine with 8 GB of RAM. With one select query (one connection running a select), the total CPU usage of SQL Server goes to ~13%, and with 10 connections it goes to the max. But with a mixture of select, update, delete and insert, the total CPU usage comes down to only ~35% with 20 concurrent connections. That means there are enough locks on resources to waste the remaining CPU.
Mubashar Ahmad
So the ultimate goal of my experiment is to adjust the connection limit, locks and other preferences so that there are no deadlocks and no inaccuracy, but maximum CPU utilization, so that the overall response time and level of concurrency of my application become better than they are now.
Mubashar Ahmad
No, it does not mean that. Beginner mistake. Selects are READS, which may or may not hit the disc (depends on db size and the exact statement). Updates/inserts mean logging operations (which take time). Depending on your disc layout, you may not face any locking contention at all, but simply a dead slow / overloaded disc subsystem that is irrelevant on the query side but starts killing you on the insert/update side. SQL Server 2008 has full tools to see what happens in the server. Use them.
TomTom
In particular, it can tell you whether you are waiting on locks or whether IO is an issue. Also, you may find that 20 threads are simply not enough. SELECTs - pulling in a lot of data - can be expensive operations, so a small number of connections can mean a lot of load. Inserts/updates are not so expensive, so your 20 threads may simply spend most of their time doing nothing - and you may need something like 200 threads to saturate the server.
TomTom
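A minimal sketch of checking that with the wait-statistics DMV (cumulative since server start): LCK_M_* waits point at lock contention, while PAGEIOLATCH_* and WRITELOG point at the disc subsystem.

```sql
-- Top wait types by accumulated wait time. Lock waits begin with
-- LCK_M_; IO waits show up as PAGEIOLATCH_* (data reads) and
-- WRITELOG (transaction log writes).
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```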
Now, for deadlocks - if you get a deadlock, you will know. It is not "slow"; it is "one connection gets a forced transaction rollback telling it it was chosen as the deadlock victim". No inaccuracies is what SQL Server guarantees, and maxing out the CPU may simply not be possible. OLTP databases (contrary to OLAP) are normally NOT CPU bound but IO bound. This means the amount and layout of your disc subsystem is going to be much more of a limiting factor.
TomTom
Questions:
* Do you have at least 8 tempdb files?
* Do you have at least 8 log files?
* Do you have at least 8 database files? (Yes, number of concurrently running operations = number of cores - in insert-heavy scenarios this can limit you, as there is a file lock while new pages are allocated.)
* Do you keep tempdb, logs and database on separate spindle groups?
TomTom
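To check the file counts TomTom asks about, something like this works (shown for tempdb; substitute your own database name for the log and data file checks):

```sql
-- Data and log files currently configured for tempdb.
SELECT name, type_desc, physical_name
FROM tempdb.sys.database_files;
```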
