I have an application that uses incident numbers (amongst other types of numbers). These numbers are stored in a table called "Number_Setup", which contains the current value of the counter.

When the app generates a new incident, it queries the Number_Setup table and gets the required number counter row (counters can be reset daily, weekly, etc. and are stored as ints). It then increments the counter and updates the row with the new value.

The application is multi-user (approximately 100 users at any one time, as well as SQL jobs that run, grab hundreds of incident records, and request an incident number for each). The incident table contains duplicate incident numbers where there should be none.

A stored procedure is used to retrieve the next counter:

SELECT @Counter = counter, @ShareId=share_id, @Id=id
FROM Number_Setup
WHERE LinkTo_ID=@LinkToId
AND Counter_Type='I'

IF ISNULL(@ShareId, 0) > 0
BEGIN 
    -- use parent counter
    SELECT @Counter = counter, @Id = id
    FROM Number_Setup
    WHERE id = @ShareId
END

SELECT @NewCounter = @Counter + 1

UPDATE Number_Setup SET Counter = @NewCounter
WHERE id=@Id

I've now surrounded that block with a transaction, but I'm not entirely sure it will 100% fix the problem, as I think the reads still only take shared locks, so another session can read the counter anyway.

Perhaps I can check in the UPDATE statement that the counter hasn't been changed:

UPDATE Number_Setup SET Counter = @NewCounter
WHERE id = @Id AND Counter = @Counter
IF @@ERROR = 0 AND @@ROWCOUNT > 0 
    COMMIT TRANSACTION
ELSE
    ROLLBACK TRANSACTION

I'm sure this is a common problem with invoice numbers in financial apps, etc.
I can't put the logic in application code and use locking at that level either. I've also looked at HOLDLOCK, but I'm not sure of its application. Should it be put on the two SELECT statements?
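
For illustration, this is roughly the hinted version I have in mind (only a sketch, reusing the names from the proc above; whether UPDLOCK/HOLDLOCK belongs on both SELECTs is exactly what I'm unsure about):

BEGIN TRANSACTION

-- take an update lock and hold it so no other session can read the counter
-- row between this SELECT and the UPDATE below
SELECT @Counter = counter, @ShareId = share_id, @Id = id
FROM Number_Setup WITH (UPDLOCK, HOLDLOCK)
WHERE LinkTo_ID = @LinkToId
AND Counter_Type = 'I'

IF ISNULL(@ShareId, 0) > 0
BEGIN
    -- use parent counter, locked the same way
    SELECT @Counter = counter, @Id = id
    FROM Number_Setup WITH (UPDLOCK, HOLDLOCK)
    WHERE id = @ShareId
END

SELECT @NewCounter = @Counter + 1

UPDATE Number_Setup SET Counter = @NewCounter
WHERE id = @Id

COMMIT TRANSACTION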

How can I ensure no duplicates are created?

A: 

Have you tried using GUIDs instead of auto-increments as your unique identifier?
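
A minimal sketch of what that could look like (my own illustration, not Jason's code; the table name is made up): the identifier column defaults to NEWID(), so every insert gets a unique value without touching a counter table.

CREATE TABLE Incident_Guid_Demo (
    incident_id uniqueidentifier NOT NULL
        CONSTRAINT DF_Incident_Guid_Demo_id DEFAULT NEWID()
        PRIMARY KEY,
    created_at datetime NOT NULL DEFAULT GETDATE()
)

-- both columns are filled in by their defaults
INSERT INTO Incident_Guid_Demo DEFAULT VALUES
SELECT incident_id FROM Incident_Guid_Demo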

Jason
A: 

If you have the ability to modify your job that gets multiple records, I would change the approach so that your counter is an identity column. Then when you get the next record you can just do an insert and get the @@identity of the table. That would ensure that you get the biggest number. You would also have to do a DBCC reseed to reset the counter, instead of just updating the table, when you want to reset the identity. The only issue is that you'd have to do 100 or so inserts as part of your SQL job to get a group of identities. That may be too much overhead, but using an identity column is a guaranteed way to get unique numbers.
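
A minimal sketch of this approach (my own illustration, not Avitus's code; the table name is made up): each insert hands back a new number, and the counter is reset with DBCC CHECKIDENT rather than an UPDATE.

CREATE TABLE Incident_Number_Demo (
    incident_number int IDENTITY(1,1) PRIMARY KEY,
    created_at datetime NOT NULL DEFAULT GETDATE()
)

DECLARE @NewNumber int

INSERT INTO Incident_Number_Demo DEFAULT VALUES
SET @NewNumber = SCOPE_IDENTITY()   -- see the comment below on SCOPE_IDENTITY() vs @@identity

-- reset the counter at the start of a new period (daily, weekly, etc.)
DBCC CHECKIDENT ('Incident_Number_Demo', RESEED, 0)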

Avitus
scope_identity() would be better than @@identity to ensure you get back the identity you have just created.
adrianbanks
A: 

I might be missing something, but it seems like you are trying to reinvent technology that has already been solved by most databases.

Instead of reading and updating the 'Counter' column in the Number_Setup table, why don't you just use an auto-incrementing primary key for your counter? You'll never have a duplicate value for a primary key.

brad
Because the counter has to generate a number that resets every day, month, or other customer-defined period. Different customers have incident numbers that reset daily and some that never reset. The incident number gets placed into a field on the incident table; that field isn't unique, just indexed. The existing stored proc has been in place for over 10 years :)
rob_g
+3  A: 

The trick is to do the counter update and read in a single atomic operation:

UPDATE Number_Setup SET Counter = Counter+1
OUTPUT INSERTED.Counter 
WHERE id=@Id;

This though does not assign the new counter to @NewCounter, but instead returns it as a result set to the client. If you have to assign it, use an intermediate table variable to output the new counter INTO:

declare @NewCounter int;
declare @tabCounter table (NewCounter int);
UPDATE Number_Setup SET Counter = Counter+1
OUTPUT INSERTED.Counter INTO @tabCounter (NewCounter)
WHERE id=@Id
SELECT @NewCounter = NewCounter FROM @tabCounter;

This solves the problem of making the Counter increment atomic. You still have other race conditions in your procedure, because LinkTo_ID and share_id can still be updated after the first SELECT, so you can increment the counter of the wrong link-to item; but that cannot be solved just from this code sample, as it also depends on the code that actually updates share_id and/or LinkTo_ID.
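
For illustration only (this is not part of the fix above, and it assumes LinkTo_ID plus Counter_Type identify exactly one row), one way to narrow that window is to resolve the effective counter row and increment it in the same statement, taking an update lock during the lookup:

declare @tabCounter2 table (NewCounter int);
UPDATE ns SET Counter = ns.Counter + 1
OUTPUT INSERTED.Counter INTO @tabCounter2 (NewCounter)
FROM Number_Setup AS ns
WHERE ns.id = (
    -- follow share_id to the parent counter when one is set
    SELECT CASE WHEN ISNULL(src.share_id, 0) > 0 THEN src.share_id ELSE src.id END
    FROM Number_Setup AS src WITH (UPDLOCK, HOLDLOCK)
    WHERE src.LinkTo_ID = @LinkToId
    AND src.Counter_Type = 'I'
);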

BTW, you should get into the habit of naming your fields with consistent case, and then always using that exact case in T-SQL code. Your scripts run fine now only because you have a case-insensitive collation on the server; if you deploy on a case-sensitive collation server and your scripts don't match the exact case of the field/table names, errors will follow galore.

Remus Rusanu
Thanks for the feedback. The schema is outside our control, and my example has made-up field names and isn't a direct copy of the actual sproc :)
rob_g
Very helpful, thanks!
Avi Flax