The documentation states: "Precision can range from 1 to 38. Scale can range from -84 to 127".
How can the scale be larger than the precision? Shouldn't the Scale range from -38 to 38?
Hmm, as I understand the reference, the precision is the count of digits. The documentation also cites a maximum precision of 126 binary digits, which is roughly equivalent to 38 decimal digits (126 × log10(2) ≈ 37.9).
In Oracle you have the type NUMBER(precision,scale), where precision is the total number of digits and scale is the number of digits to the right of the decimal point. Scale can be omitted, but that means zero. Precision can be left unspecified (e.g. NUMBER(*,10)); this means the total number of digits is as many as needed, but there are 10 digits to the right of the decimal point.
If the scale is less than zero, the value will be rounded to that many digits to the left of the decimal point, as the sketch below shows.
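A quick illustration of the negative-scale case (my own sketch, not from the original answer; the expected value follows the rounding behaviour described in Oracle's documentation):

-- scale -2 should round 123.89 to the nearest hundred, i.e. 100
select cast(123.89 as number(4,-2)) rounded_to_hundreds
from dual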
I think that if you reserve more digits to the right of the decimal point than there can be in the whole number, it means you can store something like 0.00000000123456, but I am not 100% sure (see the rough check below).
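As a rough check of that guess (my own example, not from the original post):

-- 6 significant digits (123456) inside 14 decimal places, so this should fit
select cast(0.00000000123456 as number(6,14)) tiny_value
from dual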
The question could be: why not? Try the following SQL.
select cast(0.0001 as number(2,5)) num,
       to_char(cast(0.0001 as number(2,5))) cnum,
       dump(cast(0.0001 as number(2,5))) dmp
from dual
What you see is that you can hold small numbers in that sort of structure. It might not be required very often, but I'm sure somewhere there is someone who is storing very precise but very small numbers.
Thanks to everyone for the answers. It looks like the precision is the number of significant digits.
select cast(0.000123 as number(2,5)) from dual
results in:
.00012
Whereas
select cast(0.00123 as number(2,5)) from dual
and
select cast(0.000999 as number(2,5)) from dual
both result in:
ORA-01438: value larger than specified precision allowed for this column
the second one due to rounding: at scale 5, 0.000999 rounds up to 0.00100, which needs three significant digits and so exceeds a precision of 2.
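For completeness (my own check, not part of the original answer), a value that still needs only two significant digits after rounding to scale 5 goes through fine:

select cast(0.00099 as number(2,5)) from dual

This should return .00099.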