I am using SSIS 2008 to execute multiple stored procedures in parallel in control flow. Each SP is supposed to ultimately update 1 row in a table.

The point to note is that each SP has a defined responsibility to update specific columns only.

It is guaranteed that the different SPs will not update each other's columns. So the columns to be updated are divided among the various SPs, but by design, each SP is supposed to ultimately work on the same row.
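For illustration, a minimal sketch of the pattern being described (the table, column, and parameter names below are made up):

-- Hypothetical sketch: each SP owns specific columns of the same row
-- SP #1 updates only ColA
UPDATE dbo.Results SET ColA = @valueA WHERE ResultID = @id;

-- SP #2, running in parallel, updates only ColB of that same row
UPDATE dbo.Results SET ColB = @valueB WHERE ResultID = @id;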

At the moment some of my SPs error out due to deadlocks. I am guessing this may be because of locks held on that row by the other SPs?

How can I work this out?

A: 

Simple: make sure you do not leave locks anywhere. This works via the transaction isolation level determined by the connection. With the proper transaction isolation, there will be no locks, so no deadlocks.

The update is not the problem. It starts with READS. Go to READ UNCOMMITTED to make sure you do not leave locks upon read, and / or use the NOLOCK hint in the SELECT statements to individually force them to leave NO LOCKS in place while reading data (more advisable). If you then make sure that the SP commits pretty much immediately after the update (internally or externally), there should be no deadlock possible - a write lock will simply result in the next SP waiting for the first transaction to commit.
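A minimal sketch of both options (the table, column, and variable names are hypothetical):

-- Option 1: change the isolation level for the whole batch / SP body
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT ColA FROM dbo.Results WHERE ResultID = @id;

-- Option 2: force a dirty read on an individual table via the NOLOCK hint
SELECT ColA FROM dbo.Results WITH (NOLOCK) WHERE ResultID = @id;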

Especially when going down to one table / one row only, a deadlock is not possible with update statements alone, IIRC. It is the read locks that turn a merely delaying lock (albeit a small delay) into a deadlock.

http://www.eggheadcafe.com/software/aspnet/30976898/how-does-update-lock-prevent-deadlocks.aspx has a nice example of a deadlock. So, if everything BUT the update takes no locks, then in the end a deadlock is not possible.

TomTom
I think it's kinda difficult to avoid locks when you are updating data.
bobs
I read up and found that four isolation levels are supported: READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, SERIALIZABLE. I am not sure yet which one I should use in the SP that does the UPDATE to resolve my issue.
Saurabh Kumar
I added a detailed description.
TomTom
Next time, I'd suggest warning that NOLOCK can have undesirable effects, e.g. http://sqlblogcasts.com/blogs/tonyrogerson/archive/2006/11/10/1280.aspx
Patrick Marchand
Given the specific scenario described, I think such a warning is implicit. The user pointed out specifically that updates only happen to one table / row, on different fields.
TomTom
A: 

Row level locking is as low as you can safely go, so I think you will have to break the updated table up into several tables that you can update independently of each other.
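A rough sketch of that split (all names hypothetical): one table per SP, sharing a common key, so no two SPs ever touch the same row:

-- Hypothetical split: each SP gets its own table, so no shared rows
CREATE TABLE dbo.Results_A (ResultID int PRIMARY KEY, ColA int);
CREATE TABLE dbo.Results_B (ResultID int PRIMARY KEY, ColB int);

-- Each SP now updates only its own table
UPDATE dbo.Results_A SET ColA = @valueA WHERE ResultID = @id;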

KMW
-1 for being clueless about the standard mechanisms to avoid low-level locks.
TomTom
+1  A: 

The deadlocks are probably caused by more than just having another SP locking the row. In that situation, the first procedure would simply wait until the locking SP releases the lock. That's not to say that your multiple procedures are not causing the problem. There's more to it, though.

You may have to do some rework to avoid the problem, but first you should find out more about the deadlock situation. I suspect that you have locks on objects other than the row that's being updated.

There are ways to gather more information on the deadlock. Here's a link where you can learn about the deadlock details.
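For example, on SQL Server 2008 you can enable trace flags 1204 and 1222, which write the details of each deadlock to the SQL Server error log:

-- Enable deadlock reporting globally (-1 = all sessions)
DBCC TRACEON (1204, -1);  -- resource- and lock-oriented output
DBCC TRACEON (1222, -1);  -- XML-style output including the statements involved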

bobs
A: 

No, two sessions cannot update the same row simultaneously. Row level locking is as low as it gets, so when one session is updating a row, other sessions wanting to update that record will wait.

As for the deadlocks, the trouble with SQL Server is that by default SELECT statements will block if they ask for a record that is currently being updated. You can use WITH (NOLOCK) if you don't mind reading uncommitted data.

So, if you've got this order of events:

SessionA

begin transaction

update t1
set c1 = 'x'
where c2 = 5

SessionB

begin transaction

update t2 
set c1 = 'y'
where c2 = 7

SessionA

select * from t2 where c2 = 7
<waits on SessionB>

SessionB

select * from t1 where c2 = 5
<waits on SessionA>...oops. Deadlock.

The trick is to hold a lock for as little time as necessary (but don't break up a series of statements just to release a lock - make sure that the steps that make up a logical transaction stay together as one transaction):

begin transaction

update t1
set c1 = 'x'
where c2 = 5

commit

Or (caveat emptor) use the NOLOCK hint:

SessionA

select c1 from t2 with (nolock) where c2 = 7
<gets the uncommitted value 'y'>

SessionB

select c1 from t1 with (nolock) where c2 = 5
<gets the uncommitted value 'x'>
Patrick Marchand
-1 for obviously being clueless about the ways to avoid row locking.
TomTom
Good call. I wasn't trying to avoid row locking because for me, keeping data integrity is most important.
Patrick Marchand
+1  A: 

You have to admit, this seems like a highly unusual thing to do. I wonder if it wouldn't be better to update separate tables, and then have a single update statement at the end that would join the individual tables to the final one? (i.e. update a set a.[1] = ... from a inner join b inner join c etc.).
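A rough sketch of that final consolidation step (all table and column names are hypothetical):

-- Each SP writes to its own staging table; one statement then folds
-- the pieces into the target row
UPDATE a
SET a.ColB = b.ColB,
    a.ColC = c.ColC
FROM dbo.Results a
INNER JOIN dbo.Staging_B b ON b.ResultID = a.ResultID
INNER JOIN dbo.Staging_C c ON c.ResultID = a.ResultID;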

But, if you want to continue down this path, then just set the READ UNCOMMITTED isolation level from within each of your stored procedures. That is your best bet.
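A minimal sketch of what that looks like inside a procedure (the procedure and object names are made up):

-- Hypothetical SP skeleton: reads in the body take no shared locks
CREATE PROCEDURE dbo.usp_UpdateColA
    @id int,
    @valueA int
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    -- any lookups here are dirty reads and leave no read locks behind
    UPDATE dbo.Results SET ColA = @valueA WHERE ResultID = @id;
END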

sfuqua
I guess then, in the long run, I would propose to our team that we store the outputs of the SPs in different tables and then unite them later, even though this will mean higher IO traffic due to the temporary reads/writes to the new tables by each SP.
Saurabh Kumar