Hi, in this SO question the OP wanted to drop the trailing zeros in the decimal places of his results. The workaround I suggested (below) was to CAST to DECIMAL, then CAST to FLOAT.
e.g.
SELECT CAST(0.55 AS FLOAT)
Running the example above in SQL Server 2005/2008 returns the expected result of 0.55. But as Peter pointed out in the other post, running it in SQL Server 2000 produces 0.55000000000000004.
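For reference, the full workaround looked roughly like this (dbo.Products and the Price column are made-up names, just for illustration):

-- Inner CAST to DECIMAL fixes the scale; the outer CAST to FLOAT then
-- drops the trailing zeros in the displayed result (0.50 -> 0.5, 0.55 -> 0.55).
SELECT CAST(CAST(Price AS DECIMAL(10, 2)) AS FLOAT) AS Price
FROM dbo.Products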
So my questions are:
Is FLOAT to be avoided at all costs when it comes to data conversion in SQL Server?
Why does

cast(0.55 as float)

yield 0.55000000000000004 in SQL Server 2000, yet 0.55 in later editions? Has Microsoft made FLOAT more reliable in later versions of SQL Server?
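For comparison, here is a minimal side-by-side of FLOAT against an exact decimal type (my assumption being that 0.55 has no exact binary representation, so FLOAT can only store the nearest double value):

-- FLOAT stores the closest binary (double) value to 0.55; DECIMAL(3,2)
-- stores 0.55 exactly. Whether the extra digits show up seems to depend
-- on the SQL Server version and how the value is formatted for display.
SELECT CAST(0.55 AS FLOAT)         AS AsFloat,
       CAST(0.55 AS DECIMAL(3, 2)) AS AsDecimal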
Thanks for your time.