views: 54
answers: 3

Hi guys!

I'm converting a MySQL table to PostgreSQL for the first time in my life and running into the traditional newbie problem of having no auto_increment.

Now I've found out that the Postgres solution is to use a sequence and make nextval() of that sequence the default value of the column. I've also read that the SERIAL type creates such a sequence and wires it up as the column default automatically, and that nextval() increments the counter even when called inside a transaction that is later rolled back, so the sequence never needs to be locked.
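From what I've read, a SERIAL column is roughly shorthand for the following (the names t and t_id_seq here are just placeholders):

```sql
-- roughly what "id SERIAL" expands to:
CREATE SEQUENCE t_id_seq;
CREATE TABLE t (
    id integer NOT NULL DEFAULT nextval('t_id_seq')
);
ALTER SEQUENCE t_id_seq OWNED BY t.id;
```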

What I can't find addressed is what happens when you manually insert values into a column that has a UNIQUE or PRIMARY KEY constraint and a nextval() of a sequence as its default. As far as I can see, this will cause a later INSERT to fail once the sequence reaches that value.
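For example, with a throwaway table t, I'd expect something like this:

```sql
CREATE TABLE t (id serial PRIMARY KEY, value text);
INSERT INTO t (id, value) VALUES (2, 'manual');  -- accepted, sequence untouched
INSERT INTO t (value) VALUES ('a');              -- gets id 1 from the sequence
INSERT INTO t (value) VALUES ('b');              -- fails: sequence yields 2, which is already taken
```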

Is there a simple (or common) way to fix this?

A clear explanation would be very much appreciated.

Update: If you feel I shouldn't do this, will never be able to fix this, or am making some flawed assumption, please feel free to point that out in your answers. Above all, please tell me what to do instead to offer programmers a stable and robust database that can't be corrupted with a simple insert (preferably without hiding everything behind stored procedures).

+3  A: 

If you're migrating your data, I would drop the sequence default on the column, perform all of your inserts, use setval() to set the sequence to the maximum value of your data, and then reinstate the nextval() default on the column.
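Something along these lines (a sketch; mytable and mytable_id_seq are placeholder names):

```sql
ALTER TABLE mytable ALTER COLUMN id DROP DEFAULT;
-- ... perform all of your inserts with explicit ids ...
SELECT setval('mytable_id_seq', (SELECT max(id) FROM mytable));
ALTER TABLE mytable ALTER COLUMN id SET DEFAULT nextval('mytable_id_seq');
```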

Nev Stokes
But that's just avoiding the problem :) I want a database that won't fail even if someone makes an insert with a fixed ID where the default is a generated one.
Michael Clerx
If the fixed ID already exists you will probably have issues. Best not to insert manually at all, I'd say. You could write an insert trigger that calls a function to check the inserted id against the sequence value, but I wouldn't recommend it.
Nev Stokes
Why not? What's the downside?
Michael Clerx
This seems the right approach to me. If you are migrating existing data, preserving the key will make your life simpler. Once you have migrated the data, if you are using autoincrementing keys then you should not be setting the primary key manually. You need to be consistent.
Nic Gibson
At the risk of stoking the holy war flames, the ONLY way to completely avoid the problem of manually inserted record IDs colliding with auto-generated IDs is not to use them. It's one of the inherent problems with surrogate keys.
Matthew Wood
A: 

I don't exactly understand your question, but if your goal is just to do the insert and end up with a valid id, then insert the values without the id column; that's what the default is for. It will work.

E.g. having an id serial NOT NULL column and a CONSTRAINT table_pkey PRIMARY KEY (id) in the table definition will auto-set the id and auto-increment a sequence table_id_seq.
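For example (a sketch with placeholder names):

```sql
CREATE TABLE mytable (
    id serial NOT NULL,
    value text,
    CONSTRAINT table_pkey PRIMARY KEY (id)
);

-- id is filled in automatically from mytable_id_seq
INSERT INTO mytable (value) VALUES ('foo');
```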

knitti
Thank you, but I want to know how to avoid getting an error if you create a new table, manually insert a row with id 4, and then insert rows without setting the id for a while.
Michael Clerx
+1  A: 

You can create a trigger which will check if currval('id_sequence_name')>=NEW.id.

If your transaction did not use the default value or nextval('id_sequence_name'), then currval() will throw an error, as it only works after the sequence has been updated in the current session. If you did use nextval() but then try to insert a bigger primary key, the trigger will throw another error. Either way the transaction is aborted.

This would prevent inserting any bad primary keys which would break serial.

Example code:

create table test (id serial primary key, value text);

-- Reject any manually supplied id that is ahead of the sequence.
create or replace function test_id_check() returns trigger language plpgsql as
$$ begin
  if ( currval('test_id_seq') < NEW.id ) then
    raise exception 'currval(test_id_seq)<id';
  end if;
  return NEW;
end; $$;

create trigger test_id_seq_check before insert or update of id on test
  for each row execute procedure test_id_check();

Then inserting with default primary key will work fine:

insert into test(value) values ('a'),('b'),('c'),('d');

But inserting a too-big primary key will error out and abort the transaction:

insert into test(id, value) values (10,'z');
Tometzky