For a scientific project of mine I am developing (in C# 4 and T-SQL) an application meant to handle potentially very large quantities of very simple records, performing simple operations on them (a scientific simulation engine, not a linear time-series cruncher). I'd like to use 64-bit integers as primary keys for better capacity.

In practice I am going to combine Entity Framework, processing of POCO collections and arrays, and T-SQL stored procedures.
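
For concreteness, the kind of entity I have in mind looks roughly like this (SimulationRecord and its payload field are just placeholder names I'm using here); the C# long key would map to a bigint column in SQL Server:

    // Placeholder entity: a POCO with a 64-bit surrogate key.
    // Entity Framework maps the C# long to a bigint column.
    public class SimulationRecord
    {
        public long Id { get; set; }       // bigint primary key
        public double Value { get; set; }  // the "very simple" payload
    }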

I am going to host the database on SQL Server 2008 and access it from multiple application instances simultaneously for distributed processing.

The SQL Server and the application instances are going to run on 32-bit Windows XP systems, sometimes on hardware with no 64-bit support at all.

What penalties am I going to face for using 64-bit integer types as primary keys?

+2  A: 

As long as you stick to reading and writing those numbers (i.e. no arithmetic, just database queries), the performance hit will be negligible. It will be like passing 2 ints as parameters instead of 1.
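
For example, fetching a record by its 64-bit key is just an ordinary parameterized query; the key is only a value being marshalled, so a 32-bit machine merely passes two words instead of one (table and column names below are made up for illustration):

    using System.Data;
    using System.Data.SqlClient;

    static class RecordStore
    {
        // Reading a row by its 64-bit key: the bigint parameter costs
        // two 32-bit words instead of one -- negligible.
        public static object LoadValue(SqlConnection conn, long id)
        {
            using (var cmd = new SqlCommand(
                "SELECT Value FROM SimulationRecords WHERE Id = @id", conn))
            {
                cmd.Parameters.Add("@id", SqlDbType.BigInt).Value = id;
                return cmd.ExecuteScalar();
            }
        }
    }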

Once you start doing arithmetic on them, however, it starts to get messy. Addition and subtraction are roughly 3 times as slow as for normal ints. Multiplication and division are a LOT slower, over an order of magnitude. I posted the code for multiplying 2 64-bit numbers on a 32-bit CPU somewhere on this site; I could look it up if you want, but it's over 3 pages long.
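
I won't reproduce that code here, but the core idea is easy to sketch: the 32-bit code has to split each operand into 32-bit halves and combine the partial products, so it spends several multiplies, shifts and adds where a 64-bit CPU needs a single instruction. A simplified illustration (truncating the result to 64 bits, unlike the full version):

    // Simplified 64-bit multiply built from 32-bit halves (result mod 2^64):
    // (aHi*2^32 + aLo) * (bHi*2^32 + bLo)
    //   = aHi*bHi*2^64                -- overflows 64 bits entirely, dropped
    //   + (aHi*bLo + aLo*bHi)*2^32    -- cross terms
    //   + aLo*bLo
    static ulong Mul64(ulong a, ulong b)
    {
        uint aLo = (uint)a, aHi = (uint)(a >> 32);
        uint bLo = (uint)b, bHi = (uint)(b >> 32);
        ulong lo = (ulong)aLo * bLo;
        ulong cross = (ulong)aLo * bHi + (ulong)aHi * bLo; // wraparound is harmless here
        return lo + (cross << 32); // unchecked wrap is the intended mod-2^64 result
    }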

Seeing how you're talking about ID fields, however, you shouldn't be doing any arithmetic on them, right? So you should be fine.

Blindy