Hi all, I want to understand this: I have a dump of a table (a SQL script file) from a database that uses FLOAT(9,2) as the default type for numbers. In the backup file I have a value like '4172.08'. I restore this file into a new database and convert the FLOAT to DECIMAL(20,5). Now the value in the field is 4172.08008 ... where does the 008 come from?? Thanks, all.
A:
This is the difference between float and decimal. Float is a binary type and can't represent that value exactly, so when you convert to decimal (an exact, base-10 type), the result is not exactly the original value.
See http://floating-point-gui.de/ for some more information.
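A minimal sketch of what happens (assuming MySQL, where the FLOAT(9,2) syntax comes from; the table and column names are hypothetical):

```sql
-- Hypothetical table mirroring the question's restore step
CREATE TABLE t (val FLOAT(9,2));
INSERT INTO t VALUES ('4172.08');

-- Convert the column, as in the question
ALTER TABLE t MODIFY val DECIMAL(20,5);

-- Shows 4172.08008: the nearest single-precision float to 4172.08
-- is exactly 4172.080078125, which rounds to 4172.08008 at 5 places
SELECT val FROM t;
```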
Matthew Flaschen
2010-10-17 17:29:01
OK, I understand the difference between the types ... but please explain how the conversion from float to decimal produces more information (the 008) than the original (4172.08).
Giovanni Bismondo
2010-10-17 17:33:54
It's not the conversion from float to decimal that produces the false precision. It's the conversion from string ('4172.08') to float.
Matthew Flaschen
2010-10-17 17:47:41
If you do, e.g. `SELECT float_col * 1.000000000000000` (where float_col is the float value), you can see this.
Matthew Flaschen
2010-10-17 17:50:09
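To illustrate that trick (a sketch, reusing the hypothetical table `t` from above):

```sql
-- Multiplying by a long DECIMAL literal makes the extra binary
-- digits visible instead of being rounded away on display
SELECT val * 1.000000000000000 FROM t;  -- e.g. 4172.080078125
```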
OK, I finally understand the problem. Thank you guys very much!
Giovanni Bismondo
2010-10-17 17:53:39
A:
To avoid float's inherent precision error, cast first to DECIMAL(9,2), then to DECIMAL(20,5).
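For example (again assuming MySQL and the hypothetical table `t` from the sketch above):

```sql
-- Round to the original 2-decimal precision first; the intermediate
-- DECIMAL(9,2) holds exactly 4172.08, so the widening step is exact
ALTER TABLE t MODIFY val DECIMAL(9,2);
ALTER TABLE t MODIFY val DECIMAL(20,5);

SELECT val FROM t;  -- 4172.08000 instead of 4172.08008
```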
Paulo Scardine
2010-10-17 17:45:01