I did some testing with floating point calculations to minimize precision loss. I stumbled across a phenomenon I want to show here and hopefully get an explanation for.
When I write
print 1.0 / (1.0 / 60.0)
the result is
60.0024000960
When I write the same formula but explicitly cast the literals to float
print cast(1.0 as float) / (cast(1.0 as float) / cast(60.0 as float))
the result is
60
Until now I thought that numeric literals with decimal places are automatically treated as float values with the appropriate precision. Casting to real shows the same result as casting to float.
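To check that assumption, I also inspected how the literals are typed with SQL_VARIANT_PROPERTY (my understanding is that it reports the base type, precision and scale of whatever expression you pass in):

-- Inspect the inferred type of the bare literals and of the intermediate quotient;
-- based on the rounded result above, I expect 'numeric' here rather than 'float'
select
    sql_variant_property(1.0, 'BaseType')        as literal_type,
    sql_variant_property(1.0, 'Precision')       as literal_precision,
    sql_variant_property(1.0, 'Scale')           as literal_scale,
    sql_variant_property(1.0 / 60.0, 'BaseType') as quotient_type,
    sql_variant_property(1.0 / 60.0, 'Scale')     as quotient_scale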
- Is there some documentation on how SQL Server evaluates numeric literals?
- Of what datatype are those literals?
- Do I really have to cast them to float to get better precision (which sounds like irony to me :)?
- Is there an easier way than cluttering my formulas with casts? (I sketched two ideas below.)
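For completeness, these are the two workarounds I am experimenting with instead of casting every literal. Both rest on my (possibly wrong) reading of the documentation on constants, namely that exponent-notation constants such as 1.0e0 are parsed as float:

-- Scientific-notation constants should, as far as I understand, be typed as
-- float from the start, so no CAST is needed inside the formula:
print 1.0e0 / (1.0e0 / 60.0e0)

-- Alternatively, hoisting the constants into float variables keeps the
-- formula itself free of casts:
declare @one float = 1.0, @sixty float = 60.0
print @one / (@one / @sixty)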