Hello everybody.

I'm facing a strange issue with some T-SQL code on SQL Server 2005.

The piece we suspect is causing the issue is:

INSERT INTO SGVdProcessInfo
 ([StartTs])
 VALUES
 (GETDATE())

SELECT @IdProcessInfo = SCOPE_IDENTITY()

UPDATE TOP(@quantity)
 [SGVdTLogDetail] WITH (ROWLOCK)  
SET 
 [IdSGVdProcessInfo] = @IdProcessInfo
WHERE 
 [IdSGVdProcessInfo] IS NULL
 and IdTLogDetailStatus != 9

@quantity is usually 500.

There is a non-clustered index on SGVdTLogDetail over IdSGVdProcessInfo and IdTLogDetailStatus.
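For reference, the index described above might look roughly like this (the actual index name and column order in the database may differ):

-- Hypothetical definition of the non-clustered index mentioned above;
-- the real name and column order may differ.
CREATE NONCLUSTERED INDEX IX_SGVdTLogDetail_ProcessInfo_Status
 ON SGVdTLogDetail (IdSGVdProcessInfo, IdTLogDetailStatus);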

What's happening is that some records in SGVdTLogDetail are first updated with one ID from the process-info table, and later updated again by another process with a new process-info ID.

I'm wondering whether the ROWLOCK hint is causing this issue, or maybe there's something else...

My guess is that while the update is being applied to the first 500 selected rows, another process is selecting the next batch and picking up some records from the first group which are not yet updated (because of the row-level locking). Is this possible?
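One common way to make each batch's "claim" of rows atomic on SQL Server 2005 is to combine ROWLOCK with the READPAST and UPDLOCK hints, so a concurrent process skips rows another batch has already locked instead of blocking on them. This is a sketch of that alternative, not the original code:

-- Sketch (assumes the question's table and variables): READPAST makes a
-- concurrent batch skip rows currently locked by another process;
-- UPDLOCK holds the claim until the transaction ends.
UPDATE TOP(@quantity)
 [SGVdTLogDetail] WITH (ROWLOCK, UPDLOCK, READPAST)
SET
 [IdSGVdProcessInfo] = @IdProcessInfo
WHERE
 [IdSGVdProcessInfo] IS NULL
 AND IdTLogDetailStatus != 9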

Any help will be much appreciated!

A: 

I believe this is happening because SQL Server is escalating the row-level locks to page locks. You'd think that an UPDATE in which you specify the primary key would always take a row lock, but when SQL Server gets a batch of these and some of the rows happen to live in the same page (which, depending on the situation, can be quite likely, e.g. updating all the files in a folder, where the files were created at roughly the same time), you'll see page locks, and bad things will happen. And if you don't specify a primary key for an UPDATE or DELETE, the database has no reason to assume only a few rows will be affected, so it probably goes straight to page locks, and bad things happen.

By specifically requesting row-level locks, as you are doing, these problems are usually avoided. In your case, however, lots of rows are affected, and the database is taking the initiative and escalating to page locks.

ajdams
+1  A: 

Yes, that sounds right. You can fix it (at the cost of lost concurrency) by putting the entire operation inside a serializable transaction. That will guarantee that all the rows stay locked for the life of the transaction, instead of only during the individual row-level reads and updates.
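As a sketch of that suggestion, using the insert/update pair from the question (names unchanged, assuming @IdProcessInfo and @quantity are declared earlier in the batch):

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRANSACTION;

-- Create the process-info row and capture its identity.
INSERT INTO SGVdProcessInfo ([StartTs])
 VALUES (GETDATE());

SELECT @IdProcessInfo = SCOPE_IDENTITY();

-- Claim the next batch; the serializable transaction holds the locks
-- until COMMIT, so no other process can re-claim these rows meanwhile.
UPDATE TOP(@quantity)
 [SGVdTLogDetail] WITH (ROWLOCK)
SET
 [IdSGVdProcessInfo] = @IdProcessInfo
WHERE
 [IdSGVdProcessInfo] IS NULL
 AND IdTLogDetailStatus != 9;

COMMIT TRANSACTION;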

Charles Bretana