I'm seeing conflicting references in Oracle's documentation. Is there any difference between how decimal values are stored in the FLOAT and NUMBER types in the database?
As I recall from C and similar languages, a float has accuracy limitations that an int doesn't have. E.g., for floats, 0.1 (base 10) is approximated as 0.000110011001100110011001101 (base 2), which works out to roughly 0.100000001490116119384765625 (base 10). For ints, however, 5 (base 10) is exactly 101 (base 2).
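For example, you can see that stored approximation directly in C (assuming your C library's printf rounds correctly when asked for lots of digits; with glibc it prints the value above):

#include <stdio.h>

int main(void)
{
    float f = 0.1f;        /* nearest single-precision value to 0.1 */

    /* Printing with plenty of digits exposes the stored approximation */
    printf("%.30f\n", f);
    return 0;
}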
Which is why the following won't terminate as expected in C:
float f;

/* Accumulated rounding error means f never lands exactly on 10.0,
   so the loop runs forever. */
for (f = 0.0f; f != 10.0f; )
{
    f += 0.1f;
}
However, I see elsewhere in Oracle's documentation that FLOAT is defined as a subtype of NUMBER. And as I understand it, Oracle's implementation of the NUMBER type does not run into the same problem as C's float.
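As I understand it, that's because NUMBER keeps a decimal mantissa and exponent rather than a binary one, so 0.1 can be represented exactly. Here's a toy sketch of that idea in C (purely my own illustration, not Oracle's actual internal format):

#include <stdio.h>

/* Toy decimal representation: value = mantissa * 10^-scale.
   Just an illustration of the idea, not Oracle's format. */
struct dec {
    long long mantissa;
    int       scale;
};

int main(void)
{
    struct dec tenth = { 1, 1 };   /* 0.1 stored exactly as 1 * 10^-1 */
    struct dec sum   = { 0, 1 };
    int i;

    for (i = 0; i < 10; i++)
        sum.mantissa += tenth.mantissa;

    /* Ten additions of 0.1 come out exact: mantissa 10, scale 1, i.e. 1.0 */
    printf("%lld * 10^-%d\n", sum.mantissa, sum.scale);
    return 0;
}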
So, what's the real story here? Has Oracle deviated from the norm of what I expect to happen with floats/FLOATs?
(I'm sure it's a bee-fart-in-a-hurricane of difference for what I'll be using them for, but I know I'm going to have questions if 0.1*10 comes out to 1.00000000000000001)
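(For what it's worth, this is the kind of surprise I mean: accumulating 0.1 ten times in a C double doesn't land exactly on 1; the exact digits printed depend on your C library:)

#include <stdio.h>

int main(void)
{
    double sum = 0.0;
    int i;

    for (i = 0; i < 10; i++)
        sum += 0.1;            /* each 0.1 is already slightly off */

    printf("%.17g\n", sum);    /* prints 0.99999999999999989, not 1 */
    return 0;
}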