I've run into an issue that I need clearer heads to think through. Occasionally this stored procedure (and many others similar to it):

CREATE PROC [dbo].[add_address1]
 @recno int OUTPUT,
 @clientID int,
 @street varchar (23),
 @city varchar (21),
 @state varchar (2),
 @zip varchar (9)
AS
 declare @s int;
 select @s = updated from is_clientindex where clientid = @clientID
 insert into is_address1
  (original_rec, clientID, street,city,state,zip)
  values (@s + 1, @clientID, @street,@city,@state,@zip);
 set @recno = @@IDENTITY;

Will attempt to insert null into original_rec, a column that doesn't allow nulls. The table is_clientindex is a very busy table with lots of reads going on. Inserting or updating is rare.

I think what's happening is that is_clientindex is locked, or in some other way unavailable. This causes the select to fail, eventually leading to the insert failing.

Does it sound like I'm on the right track?

Is there anything I should do to is_clientindex to help this locking issue? The table/database would have been created using the SQL Server 2005 defaults for locking.

Is there anything I should do to this stored procedure?

Unfortunately, I do need to check that updated flag in is_clientindex when inserting into this table. There's no way around that.

Edits:

  • @ClientID is valid (we know this through debugging), and the is_clientindex table is foreign-keyed to everything else in the system, so we know the record didn't vanish.

  • This only happens under heavy load with multiple users.

  • The Activity Monitor shows lots of PAGE locks, but I don't know on what, because the Object ID doesn't correspond to anything in the sys.all_objects table (see the sketch below).
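
For PAGE locks, the id shown is usually a HoBt (heap-or-b-tree) id rather than an object id, which is why it doesn't match sys.all_objects. A minimal sketch for mapping it back to a table name, assuming the SQL Server 2005 locking DMVs:

-- Map PAGE-lock resource ids (HoBt ids) back to table names
SELECT l.request_session_id, l.request_mode, o.name
FROM sys.dm_tran_locks AS l
JOIN sys.partitions AS p
  ON p.hobt_id = l.resource_associated_entity_id
JOIN sys.objects AS o
  ON o.object_id = p.object_id
WHERE l.resource_type = 'PAGE';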

+2  A: 

Locking has nothing to do with this.

SELECTs place only shared locks on the table, which are compatible with each other.

If you don't issue DML against your table, the SELECTs won't ever block each other.

Even if lock contention occurs, the SELECT will just wait for the concurrent lock to be released; only if LOCK_TIMEOUT is set (via SET LOCK_TIMEOUT) to a non-negative value and that timeout elapses will the SELECT fail.

In this case it will report an error.
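
For example (a sketch; the default LOCK_TIMEOUT of -1 means wait indefinitely):

-- Fail with error 1222 ("Lock request time out period exceeded")
-- if a lock cannot be acquired within one second
SET LOCK_TIMEOUT 1000;
SELECT updated FROM is_clientindex WHERE clientid = 1;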

Your SELECT query will leave @s as NULL if there is no record with the given @ClientID in is_clientindex (the assignment simply never happens, and @s was declared without a value), or set it to NULL if the value of updated is NULL.

Change your INSERT query into this:

INSERT
INTO    is_address1 (original_rec, clientID, street,city,state,zip)
SELECT  updated + 1, @clientID, @street, @city, @state, @zip
FROM    is_clientindex
WHERE   clientid = @clientID
        AND updated IS NOT NULL
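
If you want the procedure to fail loudly instead of silently inserting nothing when no row qualifies, you could also check @@ROWCOUNT right after the INSERT (a sketch, not part of the original procedure):

-- Hypothetical guard: @@ROWCOUNT must be read immediately after the INSERT
IF @@ROWCOUNT = 0
    RAISERROR('add_address1: no matching is_clientindex row for this clientID', 16, 1);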
Quassnoi
Actually, I know this is a possibility. However, the way the rest of the system is designed, the record *is* there. We know through debugging that @ClientID is correct, and it's foreign-keyed everywhere so we know the record hasn't vanished. This only happens under heavy load.
clintp
Is it possible that the value of `updated` is a `NULL`?
Quassnoi
@Quassnoi: I actually had to go look this one up. Nope, it's a "not null" column. Good idea though.
clintp
`@clintp`: how do you know that the `@ClientID` is valid? What debugging technique do you use?
Quassnoi
+1  A: 

Could it simply be that the is_clientindex table doesn't have a record for the given clientid (where does the @clientID argument come from?) and/or that the `updated` value for such a record is effectively NULL?

If it were a locking situation, the SELECT would just take "a long time" to complete, but it wouldn't return a NULL...

EDIT (following tomekszpakowicz's astute remark):
"Return a NULL" is a rather loose depiction of reality. The only case in which NULL would actually be returned is when a record was found and the value of the underlying column was NULL. In case of errors (be they lock time-outs, connection errors, etc.) the variable simply doesn't get assigned.
That's why one should systematically check for errors, as suggested in tomekszpakowicz's own response.

By changing the query to the single-statement INSERT ... SELECT ... FROM form, you also perform the whole operation in one single trip to the SQL server, which is more efficient and possibly of help with locking situations.

mjv
+1 on the FROM form. It looks like @clientid is in fact NULL when you're inserting... maybe there's some out-of-order execution you're not seeing in the debugger?
Broam
As a matter of fact, if there is no record for the given @clientId, SQL Server won't set @s to NULL. It will leave @s unchanged, just as if the query had failed because of an error.
Tomek Szpakowicz
Can you elaborate on that FROM form?
clintp
@clintp, see Quassnoi's response for an example of the single-statement INSERT ... SELECT ... FROM form.
mjv
A: 

When the first SELECT fails due to a lock timeout or a deadlock, the entire stored procedure will be aborted.

The problem is probably that not all clients have a row in is_clientindex. You could make sure a row exists by adding the following SQL at the start of the procedure. This will create a row if there is none:

insert into is_clientindex (clientid, updated)
select @clientid, 0
where not exists (
    select * from is_clientindex where clientid = @clientID
)

For the second part, Quassnoi's single statement approach is a very good idea.

Andomar
A: 

Readers don't block readers, so if inserts and updates on is_clientindex are rare, there will be no contention on this table. Even if SELECTs timed out, they wouldn't do so silently (and does anyone really allow SELECTs to time out by issuing SET LOCK_TIMEOUT?).

My guess is that sometimes the clientid does not exist, a condition that causes @s to not be modified, so it retains its initial value of NULL.

And, btw, use scope_identity() instead of @@identity.
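
The difference matters if a trigger on is_address1 ever inserts into another table with an identity column: @@IDENTITY would pick up the trigger's value, while SCOPE_IDENTITY() would not. A sketch of the last line of the procedure rewritten:

-- SCOPE_IDENTITY() returns the identity value generated in the current scope,
-- unaffected by inserts performed inside triggers
set @recno = SCOPE_IDENTITY();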

erikkallen
+2  A: 

By default SQL Server ignores errors inside T-SQL blocks and just tries to go on. So it all depends on how you use transactions:

  • If you manage transactions yourself in T-SQL, you need to detect errors there and do a rollback.

  • If the transaction is managed by the calling application (either explicitly or via an automatic transaction per statement), then, after the error is reported to the application, the transaction can be rolled back there.

In SQL Server 2005 or newer, use a TRY...CATCH block around the procedure body to handle the lock timeout error.
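
A minimal sketch of the question's procedure wrapped this way (SQL Server 2005 has no THROW, so the CATCH block re-raises the error with RAISERROR; transaction handling is left to the caller):

CREATE PROC [dbo].[add_address1]
 @recno int OUTPUT,
 @clientID int,
 @street varchar (23),
 @city varchar (21),
 @state varchar (2),
 @zip varchar (9)
AS
BEGIN TRY
 declare @s int;
 select @s = updated from is_clientindex where clientid = @clientID;
 insert into is_address1
  (original_rec, clientID, street, city, state, zip)
  values (@s + 1, @clientID, @street, @city, @state, @zip);
 set @recno = SCOPE_IDENTITY();
END TRY
BEGIN CATCH
 -- Re-raise the caught error so the calling application sees it
 DECLARE @errmsg nvarchar(4000), @errsev int, @errstate int;
 SELECT @errmsg = ERROR_MESSAGE(), @errsev = ERROR_SEVERITY(), @errstate = ERROR_STATE();
 RAISERROR(@errmsg, @errsev, @errstate);
END CATCH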

In SQL Server 2000 you should do

IF @@error <> 0 goto ON_ERROR

or something similar after each statement that could fail. This also works in newer versions, but it is much more work for errors like lock timeouts.
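
In context, the pattern looks something like this (a sketch; @@error must be read by the IF immediately after each statement, since any statement resets it):

 select @s = updated from is_clientindex where clientid = @clientID
 IF @@error <> 0 GOTO ON_ERROR

 insert into is_address1 (original_rec, clientID, street, city, state, zip)
  values (@s + 1, @clientID, @street, @city, @state, @zip)
 IF @@error <> 0 GOTO ON_ERROR

 RETURN 0

ON_ERROR:
 -- SQL Server has already raised the error; return a failure code to the caller
 RETURN 1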

Setting LOCK_TIMEOUT will not help. If SQL Server detects an actual deadlock (as opposed to a long wait on a lock), it will raise an error anyway.


It is still more likely to be a logic error: @s is NULL after the query. You can add a check and RAISERROR yourself if that happens, just to be sure. It's like assert() in a normal programming language.
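
Something like this, placed right after the SELECT (a sketch):

-- Assert-style guard: @s should never be NULL at this point
IF @s IS NULL
    RAISERROR('add_address1: no updated value found for clientID %d', 16, 1, @clientID);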

Tomek Szpakowicz
+1 for suggesting error handling!!!
mjv
@mjv: Thanks. Unfortunately T-SQL requires you to do it yourself, and most people forget to. It is **not** surprising: everywhere else, SQL Server (like any decent RDBMS) does a good job of catching our errors; it just falls short in T-SQL. And it won't get much better soon, because of backwards compatibility.
Tomek Szpakowicz
In this case, there was a permission problem only tangentially related to the description above. The error handling turned this up.
clintp