So yesterday we had a table with a smallint auto_increment PK that reached its maximum. We had to alter the table on an emergency basis, which is definitely not how we like to roll.

Is there an easy way to report on how close each auto_increment field that we use is to its maximum? The best way I can think of is to do a SHOW CREATE TABLE statement, parse out the size of the auto-incremented column, then compare that to the AUTO_INCREMENT value for the table.

On the other hand, given that the schema doesn't change very often, should I store information about the columns' maximum values and get the current AUTO_INCREMENT with SHOW TABLE STATUS?
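For what it's worth, the parse-and-compare approach can be sketched in a few lines. A minimal illustration in Python (the `INT_MAX` table and the `pct_used` helper are hypothetical names; the signed maxima are MySQL's documented integer ranges):

```python
# Maximum values for MySQL's signed integer types.
INT_MAX = {
    "tinyint": 127,
    "smallint": 32767,
    "mediumint": 8388607,
    "int": 2147483647,
    "bigint": 9223372036854775807,
}

def pct_used(column_type, auto_increment):
    """Given a column type string (e.g. 'smallint(6) unsigned') and the
    table's current AUTO_INCREMENT, return the percentage of values used."""
    base = column_type.split("(")[0].strip().lower()
    max_value = INT_MAX[base]
    if "unsigned" in column_type.lower():
        max_value = 2 * max_value + 1
    return 100.0 * auto_increment / max_value

# The nearly exhausted smallint PK from the question:
print(round(pct_used("smallint(6)", 32760), 2))  # → 99.98
```

The `column_type` string here is the format returned by `information_schema.columns.column_type`, so the same logic works whether you parse `SHOW CREATE TABLE` output or query the information schema directly.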

+1  A: 

smallint with auto-increment sounds like a poor design choice. The best way to fix the problem is to make a better choice on a non-emergency basis.

ddyer
not really an answer to the question...
Sam Holder
+1: Knowing your data and its use is key to design and database maintenance.
OMG Ponies
We're trying to keep the data model consistent with the open-source project from which we pull changes. Hopefully, we'll push any data model changes, particularly those that make integer column sizes irrelevant, back upstream.
David M
A: 

Just make it int unsigned and forget about its value forever.

Col. Shrapnel
See my comment above about why we're averse to data model changes.
David M
A: 

One solution I was able to find was using an advisor.

LINK

The advisor monitors your database for any changes to the schema, memory usage, performance, etc.

It also allows you to build custom rules tailored to the needs of your particular MySQL environment. In essence, you could create your own monitoring advisor (I have noticed someone doing this, though they did not explain how) and have it send a warning when your limit is about to be reached.

As for a solution to reaching the limit: you could create another column with a second ID, making the PK a composite primary key, effectively extending the number of IDs you have without having to change the data type.

ex:

Add a second PK ID field (call it 'Whatever').

Let's say your limit is 1024 rows under auto_increment. Have the second column take over once the first one reaches its limit, resetting the first to 1 and having the second jump to 2:

1023 1
1024 1
1    2
2    2
etc.
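The rollover above can be sketched in code. A minimal simulation in Python (the `LIMIT` of 1024 and the `next_composite_id` function name are assumptions taken from the example, not working MySQL logic):

```python
LIMIT = 1024  # hypothetical ceiling of the first ID column, per the example

def next_composite_id(first, second):
    """Advance a composite (first, second) key: when the first component
    hits LIMIT, wrap it back to 1 and bump the second component."""
    if first >= LIMIT:
        return 1, second + 1
    return first + 1, second

print(next_composite_id(1023, 1))  # → (1024, 1)
print(next_composite_id(1024, 1))  # → (1, 2)
print(next_composite_id(1, 2))     # → (2, 2)
```

In real MySQL this rollover would have to be implemented in a trigger or application code, since auto_increment itself only covers a single column.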

References would need to be updated in the other tables, but this doesn't require you to mess with the data type. It is by far not the best solution to the problem, and you may have thought of something better already, but I figured I'd add it in case it makes you think of another solution.

Either way, however you decide to fix or monitor the limit being reached, another database should be planned out and created. A monitor, an extra field, or however you managed to fix the database in a hurry is only a band-aid on the real problem, because if you have other columns like that, with small limits, you don't want to be stuck the day more than one of them reaches its limit at the same time.

Hope this helps somewhat.

Justin Gregoire
A: 

Hi David. Your question seems perfectly reasonable to me. You should be able to get the current auto-increment values for each table from information_schema. I don't think the max values for the various int types are available as constants in MySQL, but Roland Bouman demonstrated a simple way to generate them in MySQL:

http://stackoverflow.com/questions/2679064/in-sql-how-do-i-get-the-maximum-value-for-an-integer/2679152

If you put that data into a table, then you can write a single SQL query to get the current auto-increment status of all of your tables so you can see how close you are to running out of values.

Here's a quick-and-dirty example to get you started:

create temporary table max_int_values
(
int_type varchar(10) not null,
extra varchar(8) not null default '',
max_value bigint unsigned not null,
primary key (int_type,max_value),
key int_type (int_type),
key max_value (max_value)
);

insert into max_int_values(int_type,extra,max_value) values ('tinyint','',~0 >> 57);
insert into max_int_values(int_type,extra,max_value) values ('tinyint','unsigned',~0 >> 56);
insert into max_int_values(int_type,extra,max_value) values ('smallint','',~0 >> 49);
insert into max_int_values(int_type,extra,max_value) values ('smallint','unsigned',~0 >> 48);
insert into max_int_values(int_type,extra,max_value) values ('mediumint','',~0 >> 41);
insert into max_int_values(int_type,extra,max_value) values ('mediumint','unsigned',~0 >> 40);
insert into max_int_values(int_type,extra,max_value) values ('int','',~0 >> 33);
insert into max_int_values(int_type,extra,max_value) values ('int','unsigned',~0 >> 32);
insert into max_int_values(int_type,extra,max_value) values ('bigint','',~0 >> 1);
insert into max_int_values(int_type,extra,max_value) values ('bigint','unsigned',~0);
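As a sanity check, those shift expressions can be reproduced outside MySQL. In MySQL, ~0 on a BIGINT UNSIGNED is 64 one-bits; since Python integers are arbitrary precision, the sketch below emulates that with an explicit 64-bit mask (an assumption mimicking MySQL's unsigned 64-bit arithmetic):

```python
# Equivalent of MySQL's ~0 on BIGINT UNSIGNED: 64 one-bits.
ALL_ONES = (1 << 64) - 1

print(ALL_ONES >> 57)  # tinyint           → 127
print(ALL_ONES >> 48)  # smallint unsigned → 65535
print(ALL_ONES >> 33)  # int               → 2147483647
print(ALL_ONES >> 1)   # bigint            → 9223372036854775807
```

Each right shift by n leaves 64 - n one-bits, which is exactly the maximum of an integer type that wide.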

select t.table_schema,t.table_name,c.column_name,c.column_type,
  t.auto_increment,m.max_value,
  round((t.auto_increment/m.max_value)*100,2) as pct_of_values_used,
  m.max_value - t.auto_increment as values_left
from information_schema.tables t
  inner join information_schema.columns c 
    on c.table_schema = t.table_schema and c.table_name = t.table_name
  inner join max_int_values m 
    on m.int_type = substr(c.column_type,1,length(m.int_type)) 
    and ((m.extra like '%unsigned') = (c.column_type like '%unsigned'))
where c.extra = 'auto_increment'
order by pct_of_values_used;
Ike Walker
This is overkill for my particular needs, but it looks like a good general solution.
David M
@David: Why do you think this is overkill? I think this is exactly what you asked for.
Ike Walker
It's overkill "for my particular needs." For the general case, this is indeed exactly what I asked for.
David M