views: 434
answers: 2

A .NET application I'm writing needs to store .NET float, double, and decimal values in an Oracle database. When defining a NUMBER column in Oracle, you have to specify the precision and scale of the number:

NUMBER(p,s)

p is the precision, or the total number of digits. Oracle guarantees the portability of numbers with precision ranging from 1 to 38.

s is the scale, or the number of digits to the right of the decimal point. The scale can range from -84 to 127.
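
For example, a column declared as NUMBER(7,2) holds at most 7 significant digits, 2 of them to the right of the decimal point (the table and column names below are just for illustration):

    -- illustrative only; the table and column names are made up
    CREATE TABLE number_demo (
        amount NUMBER(7,2)   -- at most 5 digits before the decimal point, 2 after
    );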

Being the lazy programmer that I am (and I use the title "programmer" loosely), I was hoping someone else had already taken the time to figure out good NUMBER(p,s) defaults for the .NET float, double, and decimal datatypes.

+2  A: 

You don't HAVE to specify precision and scale. If you leave them off, Oracle will let you store values of any precision and scale the NUMBER type supports. If you specify them, Oracle will round away any digits beyond the scale and will reject values whose magnitude exceeds the precision.

Think of (p,s) as an edit check on your data. By specifying them, you are constraining your data in terms of magnitude and precision. Internally, the storage is the same to Oracle: for a given value, a NUMBER column takes the same space as a NUMBER(p,s) column.
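
A minimal sketch of that edit check, assuming a hypothetical NUMBER(5,2) column (which allows at most 3 digits before the decimal point):

    -- hypothetical table for illustration
    CREATE TABLE edit_check_demo (n NUMBER(5,2));

    INSERT INTO edit_check_demo VALUES (123.456);  -- accepted, stored as 123.46: digits beyond the scale are rounded
    INSERT INTO edit_check_demo VALUES (1234.5);   -- rejected with ORA-01438: magnitude exceeds the declared precision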

Your choice of p and s depends more on your application constraints than anything else.

DCookie
The problem with this is that when you read the value back, you need to read it as an OracleDecimal and then convert it to a double!
Nick R
I don't understand what this has to do with the question...
DCookie
+1  A: 

If you are using a recent version of Oracle, you might want to look into using BINARY_FLOAT or BINARY_DOUBLE instead (for the .NET floats/doubles). They use the same IEEE 754 binary representation, so there is no possible data loss when translating to and from the database format.

The max precision of the .NET decimal is 28, so there is no use defining an Oracle number with a higher precision.
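
A sketch of one possible set of column definitions along those lines; the table and column names, and the scale chosen for the NUMBER column, are assumptions rather than anything Oracle or .NET mandates:

    -- one possible mapping; pick the NUMBER scale to suit your application
    CREATE TABLE dotnet_values (
        float_col   BINARY_FLOAT,    -- .NET System.Single: 32-bit IEEE 754
        double_col  BINARY_DOUBLE,   -- .NET System.Double: 64-bit IEEE 754
        decimal_col NUMBER(28,10)    -- .NET System.Decimal: no point going above precision 28
    );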

Paul-Jan