A .NET application I'm writing needs to store the .NET float, double, and decimal datatypes in an Oracle database. When defining a NUMBER column in Oracle, you have to specify the precision and scale of the number:
NUMBER(p,s)
p is the precision, or the total number of digits. Oracle guarantees the portability of numbers with precision ranging from 1 to 38.
s is the scale, or the number of digits to the right of the decimal point. The scale can range from -84 to 127.
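To illustrate the syntax (the table and column names here are just hypothetical examples), a column declared as NUMBER(10,2) can hold up to 10 significant digits, with 2 of them to the right of the decimal point:

    CREATE TABLE price_example (
        amount NUMBER(10,2)  -- up to 8 digits before the decimal point, 2 after
    );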
Being the lazy programmer that I am (and I use the title "programmer" loosely), I was hoping someone else has already taken the time to figure this out: what are good NUMBER(p,s) defaults for the .NET float, double, and decimal datatypes?