Let's say I have a simple stored procedure that looks like this (note: this is just an example, not a practical procedure):

CREATE PROCEDURE incrementCounter AS

DECLARE @current int
SET @current = (SELECT CounterColumn FROM MyTable) + 1

UPDATE
    MyTable
SET
    CounterColumn = @current
GO

We're assuming I have a table called 'MyTable' that contains one row, with the 'CounterColumn' column holding the current count.

My question is: can this stored procedure be executed multiple times at the same time?

i.e. is this possible:

I call 'incrementCounter' twice. Call A gets to the point where it sets the '@current' variable (let's say to 5). Call B reaches the same point and also computes 5. Call A finishes executing, then Call B finishes. The table should end up containing 6, but instead contains 5 because the two executions overlapped.

+8  A: 

This is for SQL Server.

Each statement is atomic, but if you want the stored procedure to be atomic (or any sequence of statements in general), you need to explicitly surround the statements with

BEGIN TRANSACTION
Statement ...
Statement ...
COMMIT TRANSACTION

(It's common to abbreviate these to BEGIN TRAN and COMMIT TRAN.)

One correction to a common assumption: at the default isolation level (READ COMMITTED), the transaction alone does not prevent the overlap you describe. The shared lock taken by the SELECT is released as soon as the read completes, so both calls can still read the same value before either writes. To be safe, you also need to hold a lock from the read through the write, for example with locking hints or a stricter isolation level.
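A minimal sketch of the locking-hint approach, assuming the MyTable schema from the question (UPDLOCK and HOLDLOCK are standard SQL Server table hints):

CREATE PROCEDURE incrementCounter AS

BEGIN TRANSACTION

DECLARE @current int

-- UPDLOCK takes an update lock that is held until COMMIT, so a
-- concurrent call blocks here instead of reading a stale value
SET @current = (SELECT CounterColumn FROM MyTable WITH (UPDLOCK, HOLDLOCK)) + 1

UPDATE
    MyTable
SET
    CounterColumn = @current

COMMIT TRANSACTION
GO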

le dorfier
+1  A: 

I believe the situation you describe is possible. I'd personally wrap this in a transaction to be on the safe side, given that you have multiple steps and possible simultaneous execution. A nice, short way to do it:

CREATE PROCEDURE incrementCounter AS

SET XACT_ABORT ON

BEGIN TRANSACTION xact_Increment

DECLARE @current int
SET @current = (SELECT CounterColumn FROM MyTable) + 1

UPDATE
    MyTable
SET
    CounterColumn = @current

COMMIT TRANSACTION xact_Increment
GO
John Rudy
+5  A: 

In addition to placing the code between a BEGIN TRANSACTION and a COMMIT TRANSACTION, you need to ensure that your transaction isolation level is set correctly. In this case, you want to guarantee that, once the value of CounterColumn has been read by the first process, it can't be read by the second process until the first one has updated it and committed the transaction (or rolled it back). This requires the SERIALIZABLE isolation level; you are not protected without it.

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

The default level is READ COMMITTED, which is not sufficient for the example you provided.
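As a sketch, here is the procedure from the question with the isolation level set explicitly (the transaction is still required; the isolation level applies to the statements inside it):

CREATE PROCEDURE incrementCounter AS

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

BEGIN TRANSACTION

DECLARE @current int
SET @current = (SELECT CounterColumn FROM MyTable) + 1

UPDATE
    MyTable
SET
    CounterColumn = @current

COMMIT TRANSACTION
GO

One caveat: under SERIALIZABLE, two concurrent calls can deadlock (each takes a shared lock on the read, then blocks on the other's write). SQL Server resolves this by rolling one back as a deadlock victim, so the caller may have to retry, but no increment is silently lost.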

Dave Cluderay
A: 

Maybe I'm reading too much into your example (and your real situation may be significantly more complicated), but why wouldn't you just do this in a single statement?

CREATE PROCEDURE incrementCounter AS

UPDATE
    MyTable
SET
    CounterColumn = CounterColumn + 1

GO

That way, it's automatically atomic, and if two updates are executed at the same time, SQL Server will order them so as to avoid the conflict you describe. If, however, your real situation is much more complicated, then wrapping it in a transaction is the best way to do this.
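If the caller also needs the new value, a hedged variation (the OUTPUT clause is standard T-SQL; the procedure name here is just for illustration) returns it from the same atomic statement:

CREATE PROCEDURE incrementCounterAndReturn AS

UPDATE
    MyTable
SET
    CounterColumn = CounterColumn + 1
OUTPUT
    inserted.CounterColumn -- the value after the increment

GO

Because the read and the write happen in a single statement, no explicit transaction is needed for correctness.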

However, if another process reads at a "less safe" isolation level (one that allows dirty or non-repeatable reads), then a transaction won't protect against that: a process that has opted into unsafe reads can see the partially updated data.
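For example, a reader that has opted into dirty reads can observe intermediate state regardless of the writer's transaction (a sketch against the same MyTable):

-- NOLOCK ignores the writer's locks, so this can return a value written
-- inside a transaction that later rolls back
SELECT CounterColumn FROM MyTable WITH (NOLOCK)

-- equivalent session-level setting
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT CounterColumn FROM MyTable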

rwmnau