I have this database that I'm designing.
It needs to contain a couple dozen tables with records that we provide (a bunch of defaults) as well as records that the user can add. In order to keep the user from shooting himself in the foot, it's necessary to keep him from modifying the default records.
There are lots of ways to accomplish this, but I like the idea of giving protected records negative integer indexes, while reserving 0 as an invalid record id and giving user records positive integer indexes.
CREATE TABLE t1 (
  -- auto-increment spelling varies by DBMS (AUTOINCREMENT, AUTO_INCREMENT, IDENTITY, SERIAL, ...)
  ixt1 integer AUTOINCREMENT,
  d1 double,
  CONSTRAINT pk_ixt1 PRIMARY KEY (ixt1),
  CONSTRAINT ch_zero CHECK (ixt1 <> 0)
);
-2 | 171.3  <- canned record
-1 | 100.0  <- canned record
 1 | 666.6  <- user record
Reasons this seems good:
it doesn't use significantly more space
it's easy to understand
it doesn't require lots of additional tables to implement
"select * from table" gets all the pertinent records, with no additional indirection
the canned records can grow in the negative direction, and the user records can grow in the positive direction
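To illustrate that last point, here's roughly how the two ranges get populated. This is just a sketch against the table above; exactly how the auto-increment counter interacts with explicitly supplied negative ids varies by DBMS:

-- Canned records are inserted with explicit negative ids, e.g. by an install script
INSERT INTO t1 (ixt1, d1) VALUES (-1, 100.0);
INSERT INTO t1 (ixt1, d1) VALUES (-2, 171.3);

-- User records omit the id and let the auto-increment assign the next positive value
INSERT INTO t1 (d1) VALUES (666.6);  -- becomes ixt1 = 1, then 2, 3, ...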
However, I'm relatively new to database design, and after using this solution for a little while I'm starting to worry that using negative indexes might be bad, because:
Negative indexes might not be supported consistently among different DBMSs, making it difficult to write code that is database-agnostic
It might just be too easy to screw stuff up by inserting something at recid 0 (see the sketch after this list)
It might make it hard to use tools (like db grids, perhaps) that expect integer indexes with nonnegative values.
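On the second point, the ch_zero check above is meant to catch exactly that, but only if the DBMS actually enforces CHECK constraints (MySQL before 8.0.16, for instance, parsed them and silently ignored them). A sketch of the case I'm worried about, with an arbitrary value:

-- Should be rejected by ch_zero; on a DBMS that ignores CHECK, it silently lands at the reserved id
INSERT INTO t1 (ixt1, d1) VALUES (0, 42.0);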
And maybe there are some other really obvious reasons that would make this a Very Bad Idea.
So what's the definitive answer? Are negative integer indexes evil?