views: 35
answers: 2
Hi all, I want to understand this: I have a dump of a table (an SQL script file) from a database that uses float(9,2) as the default type for numbers. In the backup file I have a value like '4172.08'. I restore this file into a new database and convert the float to decimal(20,5). Now the value in the field is 4172.08008. Where does the 008 come from? Thanks, all.

A: 

This is the difference between float and decimal. Float is a binary type and can't represent that value exactly. So when you convert to decimal (a base-10 type, as the name suggests), the result is not exactly the original value.

See http://floating-point-gui.de/ for some more information.
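The effect is easy to reproduce outside the database. The sketch below (in Python, as an illustration; MySQL is not involved) round-trips 4172.08 through IEEE 754 single precision, which is what a 4-byte FLOAT stores, and then converts it to a 5-place decimal, recovering the exact 4172.08008 from the question:

```python
import struct
from decimal import Decimal

# Python floats are doubles, so force the value through IEEE 754
# single precision ('f') to simulate a 4-byte FLOAT column.
single = struct.unpack('f', struct.pack('f', 4172.08))[0]

# The nearest single-precision value to 4172.08:
print(Decimal(single))  # 4172.080078125

# Converting that to a 5-decimal-place value (like decimal(20,5))
# exposes digits that were never in the original '4172.08' string.
print(Decimal(single).quantize(Decimal('0.00001')))  # 4172.08008
```

The 008 was introduced the moment the string was parsed into a float; the decimal conversion merely makes it visible.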

Matthew Flaschen
OK, I understand the difference between the types, but please explain how the conversion from float to decimal produces more information (the 008) than the original (4172.08).
Giovanni Bismondo
It's not the conversion from float to decimal that produces the false precision. It's the conversion from string ('4172.08') to float.
Matthew Flaschen
If you do, e.g., `SELECT float_col * 1.000000000000000` (where float_col is the float column), you can see this.
Matthew Flaschen
OK, now I understand the problem. Thank you guys very much!
Giovanni Bismondo
A: 

To avoid the float's inherent precision error, cast first to decimal(9,2), then to decimal(20,5).
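Why the two-step cast works can be sketched in Python (an illustration, not MySQL itself): rounding to 2 decimal places first discards the float's binary noise, so widening to 5 places afterwards just appends zeros.

```python
import struct
from decimal import Decimal

# Simulate the value stored in a 4-byte FLOAT column.
stored = struct.unpack('f', struct.pack('f', 4172.08))[0]

# Direct cast to decimal(20,5): the float's binary noise survives.
direct = Decimal(stored).quantize(Decimal('0.00001'))

# Two-step cast: decimal(9,2) first, then decimal(20,5).
two_step = Decimal(stored).quantize(Decimal('0.01')).quantize(Decimal('0.00001'))

print(direct)    # 4172.08008
print(two_step)  # 4172.08000
```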

Paulo Scardine
nice solution ;)
Giovanni Bismondo