views: 344
answers: 4

Hi,

I'd like advice on the following situation with Oracle/PostgreSQL:

I have a table with a "running counter" column and would like to protect it in the following situation with two concurrent transactions:

T1       T2
SELECT MAX(C) FROM TABLE WHERE CODE='xx'
-- C for new : result + 1
         SELECT MAX(C) FROM TABLE WHERE CODE='xx';
         -- C for new : result + 1
         INSERT INTO TABLE... 
INSERT INTO TABLE...

So in both cases, the column value for the INSERT is calculated by adding one to the previous maximum.

Given this, a running counter handled by the database would seem fine, but that wouldn't work because

  • the counter values of existing rows are sometimes changed
  • sometimes I'd like there to be multiple counter "value groups" (as with the CODE mentioned): with different values for CODE the counters should be independent.

With some other databases this can be handled with the SERIALIZABLE isolation level, but with at least Oracle and PostgreSQL the phantom reads are prevented yet the table still ends up with two distinct rows with the same counter value. This seems to be related to predicate locking, i.e. locking "all the possible rows covered by the query" - some other databases end up locking the whole table or similar.

SELECT ... FOR UPDATE statements seem to be intended for other purposes, and don't even seem to work with the MAX() function.

Setting a UNIQUE constraint on the column would probably be a solution, but are there other ways to prevent the situation?

b.r. Touko

EDIT: One more option could probably be manual locking, even though that doesn't look nice to me.

A: 

This is why you should use the serial datatype, which defers the computation of C to insert time (backed by a sequence, I presume). You then don't specify C at all; it is generated automatically. If you need C for some intermediate calculation, you would need to insert first, then read C back, and finally update the row with the derived values.

Edit: Sorry, I didn't read your whole question. What about solving your other problems with normalization? Just create a second table for each specific type (for each x where A='x'), each with its own auto-increment column. Manually edited sequence values could go in another column in the same table, using the generated sequence as a base (i.e. if pk = 34 you can have another column mypk = '34Changed').
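A minimal sketch of this normalization idea in PostgreSQL (the table and column names here are made up for illustration):

```sql
-- One child table per group, each with its own serial counter:
CREATE TABLE items_group_xx (
    c        serial PRIMARY KEY,  -- auto-incremented per-group counter
    manual_c text,                -- optional manually edited value derived from c
    payload  text
);

-- Insert without specifying c, then read the generated value back:
INSERT INTO items_group_xx (payload) VALUES ('something') RETURNING c;
```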

disown
Sorry, I was maybe a bit unclear: this serial type seems to be the "running counter handled by the db" I mentioned. Sometimes I need the counter to run within "groups", i.e. different counters for different values of some other column. I'll try to clarify the question a bit more.
Touko
+4  A: 

Both Oracle and PostgreSQL support sequences, which are a perfect fit for your problem. You can keep a regular int column, but define one sequence per group, and do a single query like

--PostgreSQL
insert into table (id, ... ) values (nextval(sequence_name_for_group_xx), ... )

--Oracle
insert into table (id, ... ) values (sequence_name_for_group_xx.nextval, ... )

Increments in sequences are atomic, so your problem just wouldn't exist. It's only a matter of creating the required sequences, one per group.
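Assuming the group codes are known in advance, setting this up in PostgreSQL might look like the following (sequence, table, and column names are illustrative):

```sql
-- One sequence per "value group":
CREATE SEQUENCE seq_code_xx;
CREATE SEQUENCE seq_code_yy;

-- Each insert draws the counter from the sequence matching its CODE:
INSERT INTO mytable (c, code, payload)
VALUES (nextval('seq_code_xx'), 'xx', 'first row for group xx');

INSERT INTO mytable (c, code, payload)
VALUES (nextval('seq_code_yy'), 'yy', 'first row for group yy');
```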

Vinko Vrsalovic
Oracle does sequences as well, with a similar kind of syntax: Insert into table (id, ...) values (sequence_name.nextval, ...)
Thomas Jones-Low
Thanks, Thomas. Added documentation links.
Vinko Vrsalovic
A: 
  • the counter values or existing rows are sometimes changed

You should put a unique constraint on that column if duplicates would be a problem for your app. Doing so guarantees that a transaction at SERIALIZABLE isolation level will abort if it tries to use the same id as another transaction.
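Since the counters should be independent per CODE, the constraint would need to cover the group column and the counter together. A sketch (table and constraint names are illustrative):

```sql
-- Enforce uniqueness per (code, counter) pair, so counters in
-- different groups stay independent of each other:
ALTER TABLE mytable
    ADD CONSTRAINT mytable_code_c_unique UNIQUE (code, c);
```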

One more option could probably be manual locking even though it doesn't appear nice to me..

Manual locking in this case is pretty easy: just take a SHARE UPDATE EXCLUSIVE or stronger lock on the table before selecting the maximum. This will kill concurrent performance, though.
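A sketch of that manual-locking approach in PostgreSQL (the table name is illustrative); the explicit lock serializes writers, so the MAX-plus-one computation and the insert cannot interleave:

```sql
BEGIN;
-- Blocks other sessions taking this (or a stronger) lock until COMMIT:
LOCK TABLE mytable IN SHARE UPDATE EXCLUSIVE MODE;

-- Compute the next counter value and insert it in one statement;
-- COALESCE handles the case where the group is still empty:
INSERT INTO mytable (c, code)
SELECT COALESCE(MAX(c), 0) + 1, 'xx'
FROM mytable
WHERE code = 'xx';

COMMIT;  -- releases the lock
```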

  • sometimes I'd like there to be multiple counter "value groups" (as with the CODE mentioned) : with different values for CODE the counters would be independent.

This leads me to the right solution for this problem: sequences. Set up several sequences, one for each "value group" whose IDs should be in their own range. See Section 9.15 of The Manual for the details of sequences and how to use them; they look like a perfect fit for you. Sequences will never give the same value twice, but might skip values: if a transaction gets the value '2' from a sequence and aborts, the next transaction will get '3' rather than '2'.

kquinn
A: 

Hi,

You can create a sequential column by using a sequence as the default value:

First, create the sequence counter:

CREATE SEQUENCE SEQ_TABLE_1 START WITH 1 INCREMENT BY 1;

Then use it as the default value:

CREATE TABLE T (
  COD NUMERIC(10) DEFAULT NEXTVAL('SEQ_TABLE_1') NOT NULL,
  column1 ...,
  column2 ...
);

Now you don't need to worry about the sequence when inserting rows:

INSERT INTO T (column1, column2) VALUES (value1, value2);

Regards.

Christian Almeida