To illustrate the problem, here is an example:

A tag_bundle consists of one or more tags. A unique tag combination maps to a unique tag_bundle, and vice versa.

 tag_bundle                   tag            tag_bundle_relation
 +---------------+        +--------+      +---------------+--------+
 | tag_bundle_id |        | tag_id |      | tag_bundle_id | tag_id |
 +---------------+        +--------+      +---------------+--------+
 |       1       |        | 100    |      |       1       |  100   |
 +---------------+        +--------+      +---------------+--------+
 |       2       |        | 101    |      |       1       |  101   |
 +---------------+        +--------+      +---------------+--------+ 
                          | 102    |      |       2       |  101   |
                          +--------+      +---------------+--------+  
                                          |       2       |  102   |
                                          +---------------+--------+

There can't be another tag_bundle with exactly the same combination of tag 100 and tag 101, and there can't be another tag_bundle with exactly the same combination of tag 101 and tag 102.

How can I enforce such a uniqueness constraint when executing SQL concurrently? That is, how can I prevent two bundles with exactly the same tag combination from being added at the same time?

Adding a simple unique constraint to any single table does not work. Is there any solution other than a trigger or an explicit lock?
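(For what it's worth, the closest plain constraint, sketched below with the table names from the example, only forbids the same tag appearing twice inside one bundle; it cannot express "no two bundles may share the same tag set".)

    -- Only rules out duplicate (bundle, tag) rows, not two different
    -- bundles that end up with the same overall tag combination.
    ALTER TABLE tag_bundle_relation
        ADD CONSTRAINT tag_bundle_relation_uniq UNIQUE (tag_bundle_id, tag_id);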

The only simple way I have come up with is to serialize the tag combination into a string and make that a unique column, as sketched after the diagram below.

tag_bundle  (unique on tags)         tag            tag_bundle_relation
 +---------------+-----------+      +--------+      +---------------+--------+
 | tag_bundle_id |  tags     |      | tag_id |      | tag_bundle_id | tag_id |
 +---------------+-----------+      +--------+      +---------------+--------+
 |       1       | "100,101" |      | 101    |      |       1       |  101   |
 +---------------+-----------+      +--------+      +---------------+--------+
                                    | 100    |      |       1       |  100   |
                                    +--------+      +---------------+--------+ 

but it doesn't seem like a good way :(
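A minimal sketch of that workaround in PostgreSQL (the text column type and the sorted serialization order are assumptions added for illustration; the table and column names come from the diagram):

    -- Sketch of the string workaround. The tags string must be built in a
    -- canonical (e.g. sorted) order, otherwise '100,101' and '101,100'
    -- would both be accepted as different combinations.
    CREATE TABLE tag (
        tag_id integer PRIMARY KEY
    );

    CREATE TABLE tag_bundle (
        tag_bundle_id serial PRIMARY KEY,
        tags          text NOT NULL UNIQUE   -- e.g. '100,101'
    );

    CREATE TABLE tag_bundle_relation (
        tag_bundle_id integer NOT NULL REFERENCES tag_bundle,
        tag_id        integer NOT NULL REFERENCES tag,
        PRIMARY KEY (tag_bundle_id, tag_id)
    );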

+1  A: 

Why the constraint of 'without a trigger'? With a trigger, combined with a bit of data duplication, you can get what you need: change the 'tags' field in your solution to an array of INTEGERs (or whatever type tag_id is).

While recognising the unpleasantness of the solution, I don't see a way around it. I would, however, use an array instead of a string for 'tags', put it in a separate table from tag_bundle, still make it unique, and put a trigger on tag_bundle_relation that updates the tags field with array_agg(tag_id) (PostgreSQL 8.4 or later); if that update violates the unique constraint, the trigger fails. A sketch follows.
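A hedged sketch of how this might look (the side table bundle_tags, the function and trigger names, and the ORDER BY inside array_agg, which needs PostgreSQL 9.0+, are assumptions added for illustration):

    -- Side table holding one canonically ordered tag array per bundle.
    CREATE TABLE bundle_tags (
        tag_bundle_id integer PRIMARY KEY REFERENCES tag_bundle,
        tags          integer[] NOT NULL UNIQUE
    );

    CREATE OR REPLACE FUNCTION refresh_bundle_tags() RETURNS trigger AS $$
    BEGIN
        -- Rebuild the array for the affected bundle; the UNIQUE constraint
        -- on bundle_tags.tags raises an error (failing the statement) if
        -- another bundle already has the same combination.
        -- Caveat: with row-at-a-time inserts, an intermediate array may
        -- collide with an existing bundle; the deferred constraint trigger
        -- in the next answer is one way around that.
        DELETE FROM bundle_tags WHERE tag_bundle_id = NEW.tag_bundle_id;
        INSERT INTO bundle_tags (tag_bundle_id, tags)
        SELECT NEW.tag_bundle_id, array_agg(tag_id ORDER BY tag_id)
          FROM tag_bundle_relation
         WHERE tag_bundle_id = NEW.tag_bundle_id;
        RETURN NULL;  -- return value is ignored for AFTER triggers
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER tag_bundle_relation_tags
    AFTER INSERT ON tag_bundle_relation
    FOR EACH ROW EXECUTE PROCEDURE refresh_bundle_tags();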

MkV
A: 

In order to work correctly when multiple transactions are updating the tables, you will need to create a deferrable, initially deferred, constraint trigger.
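A rough sketch of what such a trigger could look like (the function body, the function name check_bundle_uniqueness, and the trigger name are assumptions, not part of this answer; the comments below discuss whether this approach is actually race-free):

    -- Deferred constraint trigger: the per-row check runs at commit time.
    CREATE OR REPLACE FUNCTION check_bundle_uniqueness() RETURNS trigger AS $$
    BEGIN
        -- Fail if any other bundle has exactly the same set of tags.
        IF EXISTS (
            SELECT 1
              FROM tag_bundle_relation r
             WHERE r.tag_bundle_id <> NEW.tag_bundle_id
             GROUP BY r.tag_bundle_id
            HAVING array_agg(r.tag_id ORDER BY r.tag_id) =
                   (SELECT array_agg(tag_id ORDER BY tag_id)
                      FROM tag_bundle_relation
                     WHERE tag_bundle_id = NEW.tag_bundle_id)
        ) THEN
            RAISE EXCEPTION 'duplicate tag combination for bundle %', NEW.tag_bundle_id;
        END IF;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE CONSTRAINT TRIGGER tag_bundle_relation_unique_set
    AFTER INSERT OR UPDATE ON tag_bundle_relation
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE PROCEDURE check_bundle_uniqueness();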

Stephen Denne
Correct me if I'm wrong, but, unless you serialize the function calls, a trigger will not save you from race conditions, because changes made by other transactions executing in parallel are not visible. A *deferred* constraint trigger *reduces* the window of opportunity for race conditions, but it's still possible for two such triggers to run in parallel without seeing each other's (yet-uncommitted) data.
intgr
@intgr excellent question. I don't know the answer. I presumed that committing transactions was serialized, and that therefore so was the processing of trigger functions deferred to commit time. Perhaps the Serializable isolation level is required to guarantee that. If the decision on whether an update is allowed depends on whether other concurrent transactions commit or roll back, then that other commit or rollback needs to complete first, along with its deferred triggers.
Stephen Denne