Hello, I am building an application that has to run on both SQL Server and PostgreSQL, so I am asking this question about both of them.
What happens when you create a unique primary key (using a sequence or an auto-increment type) and you pass 4 billion records, the 32-bit limit (about 2.1 billion in practice, since INT is signed in both databases)? I'm not saying our table will ever hold 4 billion rows at once, but rather that 4 billion records will have been created over time, because the RID only ever increments. So even if I deleted 3.9 billion of those records, my RIDs would still be up in the 4 billion range. So what happens? Does the database widen the column to 64 bits, roll over to 0, or just spit out a critical error? And should I worry that even a 64-bit RID may eventually overflow too?
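For context, here is roughly how I'm defining these keys today. This is just a sketch; the table and column names are placeholders:

```sql
-- SQL Server: 32-bit auto-increment key (placeholder names)
CREATE TABLE orders (
    order_id  INT IDENTITY(1,1) PRIMARY KEY, -- the 32-bit RID I'm asking about
    placed_at DATETIME2 NOT NULL
);

-- PostgreSQL: 32-bit key backed by an implicit sequence (placeholder names)
CREATE TABLE orders (
    order_id  SERIAL PRIMARY KEY, -- same concern, implemented via a sequence
    placed_at TIMESTAMPTZ NOT NULL
);
```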
Also, how can I fight against this? Is there some kind of cleanup option or tool? Do I just have to build my own process that completely rebuilds the table every year or so to get compact RIDs again (and thus also touch a lot of other tables that use these RIDs as foreign keys)?
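Or is simply declaring the key as 64-bit from the start the accepted fix? A sketch of what I mean, using the same placeholder names as above:

```sql
-- SQL Server: 64-bit variant of the same key
CREATE TABLE orders (
    order_id  BIGINT IDENTITY(1,1) PRIMARY KEY, -- max ~9.2 quintillion
    placed_at DATETIME2 NOT NULL
);

-- PostgreSQL: 64-bit variant
CREATE TABLE orders (
    order_id  BIGSERIAL PRIMARY KEY, -- sequence max ~9.2 quintillion
    placed_at TIMESTAMPTZ NOT NULL
);
```

And if monitoring is the better answer, I assume I can watch how close I'm getting with something like `SELECT IDENT_CURRENT('orders');` on SQL Server or `SELECT last_value FROM orders_order_id_seq;` on PostgreSQL?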