views:

221

answers:

4

I'm in the process of designing a database for high-volume data, and I was wondering what datatype to use for the primary keys?

There will be table partitioning, and the database will ultimately be clustered, with hot failover to alternative datacentres.

EDIT


Tables - think of a chat system covering multiple time periods and multiple things to chat about, with multiple users chatting about each time period and thing.

Exponential growth is what I am thinking about - i.e. something could generate billions of rows in a small time period, before we could change the database or a DBA could do DBA things.

Mark - I share your concern about GUIDs - I don't like coding with GUIDs flying about.

+1  A: 

You can always go for int, but taking your partitioning/clustering into account, I'd suggest you look into uniqueidentifier, which will generate globally unique keys.

duckyflip
A: 

I think that int will be very good for it.

The range of INTEGER is -2,147,483,648 to 2,147,483,647.

You can also use uniqueidentifier (GUID), but in that case consider:

  • the table row size limit in MSSQL (a GUID takes 16 bytes versus 4 for an int)
  • storage and memory: imagine a table with 10,000,000 rows and growing
  • flexibility: T-SQL operators such as >, <, and = work naturally with INT
  • GUIDs are not optimized for ORDER BY/GROUP BY queries, or for range queries in general
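To put rough numbers on the storage point above, here's a quick sketch (plain Python; the sizes reflect SQL Server's 4-byte int and 16-byte uniqueidentifier, and the row count is just the 10,000,000 figure from the list):

```python
import struct
import uuid

# A SQL Server int is a signed 32-bit value: 4 bytes.
int_key = struct.pack("<i", 2_147_483_647)
# A uniqueidentifier (GUID) is 16 bytes.
guid_key = uuid.uuid4().bytes

print(len(int_key))   # 4
print(len(guid_key))  # 16

# For a 10,000,000-row table, key storage alone (ignoring page and
# index overhead) differs by a factor of four:
rows = 10_000_000
print(rows * len(int_key))   # 40,000,000 bytes  (~38 MB)
print(rows * len(guid_key))  # 160,000,000 bytes (~153 MB)
```

And because the clustering key is repeated in every non-clustered index and foreign-key relationship, that 4x difference gets multiplied across the whole schema.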
ole6ka
+1  A: 

int tends to be the norm unless you need a massive volume of data, and has the advantage of working with IDENTITY etc.; GUID has some advantages if you want the keys to be un-guessable or exportable, but if you use a GUID (unless you generate it yourself as "combed") you should ensure it is non-clustered (the index, that is; not the farm), as it won't be incremental.
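The "combed" GUID mentioned here works by embedding a timestamp in the bytes SQL Server weighs most heavily when comparing uniqueidentifier values, so new GUIDs still land near the end of a clustered index. A simplified sketch of the idea - the byte layout below is illustrative, not the canonical COMB format:

```python
import os
import time
import uuid

def comb_guid() -> uuid.UUID:
    """Sketch of a 'combed' GUID: 10 random bytes followed by a 48-bit
    millisecond timestamp. SQL Server gives the trailing bytes of a
    uniqueidentifier the highest weight when sorting, so values built
    this way compare in roughly creation order, avoiding the random
    insert positions (and fragmentation) of plain uuid4/newid()."""
    ts = int(time.time() * 1000).to_bytes(6, "big")
    return uuid.UUID(bytes=os.urandom(10) + ts)
```

In practice you'd reach for a ready-made implementation on your platform (e.g. NHibernate's guid.comb generator) rather than rolling your own.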

Marc Gravell
+3  A: 

With just the little bit of info you've provided, I would recommend using a BigInt, which would take you up to 9,223,372,036,854,775,807, a number you're not likely ever to exceed. (Don't start with an INT thinking you can easily change it to a BigInt when you exceed 2 billion rows. It's possible (I've done it), but it can take an extremely long time and involve significant system disruption.)
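The limits quoted in these answers fall straight out of the signed 32- and 64-bit widths; a quick check, plus a back-of-the-envelope capacity figure (the billion-rows-per-day ingest rate is a made-up assumption for illustration):

```python
# SQL Server int and bigint are signed 32-bit and 64-bit integers.
INT_MAX = 2**31 - 1
BIGINT_MAX = 2**63 - 1

print(INT_MAX)     # 2147483647
print(BIGINT_MAX)  # 9223372036854775807

# Even at a hypothetical billion new rows per day, a bigint identity
# would run for tens of millions of years before wrapping:
rows_per_day = 1_000_000_000
print(BIGINT_MAX // rows_per_day // 365)  # over 25,000,000 years
```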

Kimberly Tripp has an excellent series of blog articles (GUIDs as PRIMARY KEYs and/or the clustering key and The Clustered Index Debate Continues) on creating clustered indexes and choosing the primary key (related issues, but not always exactly the same). Her recommendation is that a clustered index/primary key should be:

  1. Unique (otherwise useless as a key)
  2. Narrow (the key is used in all non-clustered indexes, and in foreign-key relationships)
  3. Static (you don't want to have to change all related records)
  4. Always Increasing (so new records always get added to the end of the table, and don't have to be inserted in the middle)

If you use a BigInt as an increasing identity as your key and your clustered index, that should satisfy all four of these requirements.

Edit: Kimberly's article I mentioned above (GUIDs as PRIMARY KEYs and/or the clustering key) talks about why a (client generated) GUID is a bad choice for a clustering key:

But, a GUID that is not sequential - like one that has its values generated in the client (using .NET) OR generated by the newid() function (in SQL Server) - can be a horribly bad choice - primarily because of the fragmentation that it creates in the base table, but also because of its size. It's unnecessarily wide (it's 4 times wider than an int-based identity - which can give you 2 billion (really, 4 billion) unique rows). And, if you need more than 2 billion you can always go with a bigint (8-byte int) and get 2^63-1 rows.

SQL Server has a function called NEWSEQUENTIALID() that generates sequential GUIDs, which avoids the fragmentation issue, but they still have the problem of being unnecessarily wide.

BradC
Hi Brad - thanks for this detailed answer - I will be leaving it open till next week, and I hope others vote this up, as it answers my question at the moment. Just wanted to say thanks.
