Is there any performance impact or any other kind of issue with starting a sequence at a very large number? The reason I am doing this is that we are doing some synchronization between two sets of DBs with similar tables, and we want to avoid duplicate PK errors when synchronizing the data.

+5  A: 

Yes, it's okay.

Note: If you have performance concerns, you can use the "CACHE" option of "CREATE SEQUENCE":

"Specify how many values of the sequence the database preallocates and keeps in memory for faster access. This integer value can have 28 or fewer digits. The minimum value for this parameter is 2. For sequences that cycle, this value must be less than the number of values in the cycle. You cannot cache more values than will fit in a given cycle of sequence numbers. Therefore, the maximum value allowed for CACHE must be less than the value determined by the following formula:"

CEIL((MAXVALUE - MINVALUE) / ABS(INCREMENT))

"If a system failure occurs, all cached sequence values that have not been used in committed DML statements are lost. The potential number of lost values is equal to the value of the CACHE parameter."

splattne
+3  A: 

Sure. What you plan on doing is actually a rather common practice. Just make sure the variables in your client code that hold IDs are big enough (i.e., use longs instead of ints).

James Curran
+3  A: 

The only problem we recently had with creating tables with really large seeds was when we tried to interface with a system we did not control. That system was apparently reading our IDs as a char(6) field, so when we sent row 10000000 it would fail to write.

Performance-wise we have seen no issues on our side with using large ID numbers.

JoshBaltzell
+1  A: 

No performance impact that we've seen. I routinely bump sequences up by a large amount. The gaps come in handy if you need to "backfill" data into the table.
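
One common way to bump an Oracle sequence by a large amount is to widen its increment temporarily (a sketch; the sequence name is made up):

-- Temporarily widen the step, consume one value to jump, then restore it
ALTER SEQUENCE order_id_seq INCREMENT BY 1000000;
SELECT order_id_seq.NEXTVAL FROM dual;
ALTER SEQUENCE order_id_seq INCREMENT BY 1;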

The only time we had a problem was when a really large sequence exceeded MAXINT in a particular client program. The sequence was fine, but the conversion to an integer in the client app started failing! In our case it was easy to refactor the ID column in the table and get things running again, but in retrospect this could have been a messy situation if the tables had been arranged differently!

Nack
A: 

If you are synching two tables, why not change the PK seed/increment so that everything takes care of itself when a new PK is added?

Let's say you had to synch the data from 10 patient tables in 10 different databases.
Let's also say that eventually all databases had to be synched into a Patient table at headquarters.

Increment the PK by ten for each row, but ensure the last digit is different for each database:

DB0: 10, 20, 30, ...
DB1: 11, 21, 31, ...
...
DB9: 19, 29, 39, ...

When everything is merged, there are guaranteed to be no conflicts.
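
A sketch of this scheme with one Oracle sequence per database (names are illustrative):

-- DB0: generates 10, 20, 30, ...
CREATE SEQUENCE patient_id_seq START WITH 10 INCREMENT BY 10;
-- DB1: generates 11, 21, 31, ...
CREATE SEQUENCE patient_id_seq START WITH 11 INCREMENT BY 10;
-- ... and so on, up to DB9: 19, 29, 39, ...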

This scales easily to n database tables. Just make sure your PK type will not overflow; I think a BigInt should be big enough for you.

Dining Philanderer