I am running this SQL code in SQL Server 2005:

declare @a as float(24)
set @a = 0.85
select cast(@a as float(53))

and the result is

0.850000023841858

does anyone know why?

Thanks.

+2  A: 

When you declare float(N) with N of 24 or less, SQL Server stores the value as a 4-byte single-precision float, which is accurate to about seven significant digits. 0.85 has no exact binary representation, so the nearest single-precision value gets stored instead; when you then cast it to float(53) (8-byte double precision), the extra digits expose that stored rounding error. That's the 'floating' in floating point.

http://msdn.microsoft.com/en-us/library/ms173773.aspx
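
Here's a minimal sketch of that behaviour (the variable names and column aliases are just for illustration): the same literal stored at both precisions, with the narrow one widened by the cast.

-- float(24) is stored as a 4-byte single (~7 significant digits);
-- float(25..53) is stored as an 8-byte double (~15 significant digits)
declare @single as float(24)
declare @double as float(53)
set @single = 0.85
set @double = 0.85

select cast(@single as float(53)) as widened_single, -- 0.850000023841858
       @double as native_double                      -- displays as 0.85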

CodeByMoonlight
Thanks for the quick reply. Later on, I found this as well, which supports what you said: http://support.microsoft.com/kb/125056
N30
+1  A: 

The number you see is as close to 0.85 as the computer can get with the binary digits it has available.

If you try to write 1/3 in decimal but only have room for 8 digits, the closest you can get is 0.33333333. That's still off by quite a way, but with more decimal places you could get closer. The computer faces exactly the same problem, except that where each of your successive digits represents tenths, hundredths, thousandths, the computer's digits represent 1/2, 1/4, 1/8, 1/16, and so on.
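
A quick way to see the same thing in T-SQL (variable names are illustrative): 0.75 = 1/2 + 1/4 has an exact finite binary form, so widening it loses nothing, while 0.85 does not, so the cast exposes the stored error.

declare @exact as float(24)
declare @inexact as float(24)
set @exact = 0.75   -- exactly representable in binary
set @inexact = 0.85 -- no finite binary expansion

select cast(@exact as float(53)) as exact_value,     -- 0.75
       cast(@inexact as float(53)) as inexact_value  -- 0.850000023841858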

Gareth
thanks for the clarification on how float works
N30