As Paul writes: no, it's not safe. I would like to add some empirical evidence: create a table Table_1 with a single field ID and one record with value 0.
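A minimal setup might look like this (a sketch; the column type isn't specified above, so int is an assumption):

CREATE TABLE Table_1 (ID int NOT NULL)
INSERT INTO Table_1 (ID) VALUES (0)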
Then execute the following code simultaneously in two Management Studio query windows:
declare @counter int
set @counter = 0
while @counter < 1000
begin
    set @counter = @counter + 1
    -- Unsafe read-then-insert: nothing stops a concurrent session
    -- from reading the same MAX(ID) and inserting the same value
    INSERT INTO Table_1
    SELECT MAX(ID) + 1 FROM Table_1
end
Then execute:
SELECT ID, COUNT(*) FROM Table_1 GROUP BY ID HAVING COUNT(*) > 1
On my SQL Server 2008, one ID (662) was created twice: both windows read the same MAX(ID) before either insert was visible to the other. Thus, the default isolation level applied to single statements is not sufficient.
EDIT: Clearly, wrapping the INSERT in BEGIN TRANSACTION and COMMIT won't fix it, since the default isolation level for transactions is still READ COMMITTED, which is not sufficient. Note that setting the transaction isolation level to REPEATABLE READ is not sufficient either: it only locks the rows actually read and does not prevent a concurrent transaction from inserting a new (phantom) row with the same MAX(ID) + 1. The only way to make the above code safe is to add

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

at the top. This, however, caused deadlocks every now and then in my tests.
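For reference, the serializable variant has this shape (a sketch of what I tested; SERIALIZABLE makes the MAX(ID) read take a key-range lock that is held until COMMIT, which prevents the duplicates but also explains the occasional deadlock when both sessions hold the shared range lock and then try to insert):

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

declare @counter int
set @counter = 0
while @counter < 1000
begin
    set @counter = @counter + 1
    BEGIN TRANSACTION
        INSERT INTO Table_1
        SELECT MAX(ID) + 1 FROM Table_1
    COMMIT
end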
EDIT: The only solution I found that is safe and did not produce deadlocks (at least in my tests) is to explicitly lock the table exclusively; the default transaction isolation level is sufficient in that case. Beware, though: this solution can kill performance, because it serializes all access to the table:
...loop stuff...
BEGIN TRANSACTION
    -- Take an exclusive table lock and hold it until COMMIT;
    -- WHERE 1=0 returns no rows, the statement exists only for its lock hints
    SELECT * FROM Table_1 WITH (TABLOCKX, HOLDLOCK) WHERE 1=0
    INSERT INTO Table_1
    SELECT MAX(ID) + 1 FROM Table_1
COMMIT
...loop end...