The database I am working on right now has records being added and removed by the tens of thousands. Because of this, there are gaps in the auto-incremented key that are hundreds of thousands wide, and the auto-increment value is already well into the billions.

These numbers are never stored to reference individual records; they are only used to reference a record during on-the-fly calculations.

Is there any reason to remove these gaps and reset the auto-increment number, or is this inconsequential?

The id field is an unsigned INT; should I increase it to an unsigned BIGINT? From what I understand, things will break once it hits 4,294,967,295.
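
To put a number on it, the remaining headroom can be read straight out of the database's metadata. A minimal sketch, assuming MySQL and a hypothetical table name `results`:

    -- Next auto-increment value vs. the UNSIGNED INT ceiling.
    -- Assumes MySQL; `results` is a placeholder table name.
    SELECT
        AUTO_INCREMENT               AS next_id,
        4294967295 - AUTO_INCREMENT  AS ids_remaining
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = DATABASE()
      AND TABLE_NAME   = 'results';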

+3  A: 

The only reason I'd worry about it is if you find yourself close to that 2^32 limit. If you're not using the column as a row id, then don't even worry about it.

EDIT If you are using this column for any kind of identifying information, then I'd switch the column over to a GUID or something, because you're gonna get overflow, and then you'll get duplicate values. And that's no bueno.

Bryan Ross
"auto-increment numbers well into the billions." 2^32 is only 4 billion, give or take a few hundred million. Perhaps now is a good time to start worrying before it's too late.
Mark Byers
+1  A: 

I don't know what the growth rate of your autoincrement field is, but it should be simple math for you to estimate when you will hit the 4294967295 limit.
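For instance (a hedged sketch, again assuming MySQL and a hypothetical table `results`): sample the counter twice, a known interval apart, and divide.

    -- Read the counter now, and again after a known interval (say, a day).
    -- Note: MySQL 8 may serve a cached value; ANALYZE TABLE refreshes it.
    SELECT AUTO_INCREMENT
      FROM information_schema.TABLES
     WHERE TABLE_SCHEMA = DATABASE()
       AND TABLE_NAME   = 'results';
    -- Then:
    --   ids_per_day      = second_reading - first_reading
    --   days_to_overflow = (4294967295 - second_reading) / ids_per_day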

If you still feel that you need to do something, you have the following options (rough SQL sketches follow the list):

  1. Reset the current count to 1, ideally by dropping the column and recreating it. Since you are not using this field for referential integrity, it should be a quick and simple fix until the next time...
  2. Change the datatype to an unsigned BIGINT. Then you can go up to 18,446,744,073,709,551,615, but every row needs more storage for the wider column, and you have only postponed the problem.
  3. Change from an auto-increment (INT / BIGINT) to a UUID. Then you can stop worrying about numbers and the nature of infinity, but you will most likely have to change all of your client code.
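
Rough SQL sketches of the three options, assuming MySQL and a hypothetical table `results` (all three rewrite the table, so plan for a maintenance window):

    -- Option 1: drop and recreate the id column, which resets the counter to 1.
    ALTER TABLE results DROP COLUMN id;
    ALTER TABLE results
      ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;

    -- Option 2: widen the column to BIGINT UNSIGNED
    -- (max 18,446,744,073,709,551,615).
    ALTER TABLE results
      MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;

    -- Option 3: add a UUID, stored compactly as BINARY(16),
    -- alongside or instead of the numeric id.
    ALTER TABLE results ADD COLUMN uid BINARY(16) NOT NULL;
    UPDATE results SET uid = UNHEX(REPLACE(UUID(), '-', ''));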

On a separate note, I sense a poor decision or two somewhere earlier up the line here.

Richard Quinn
Agreed. Why exactly are you deleting records by the tens of thousands? You might want to rethink your normalization.
MindStalker
The table is used to store research data. Once the calculations are completed, there is no reason to keep the calculation data any longer, so it is deleted.
But thanks for the tip on UUIDs; I will look into it.