I'm dividing two integers, x and y, in MS SQL, and I want the result in floating-point form: 5/2 should equal 2.5. When I simply do
SELECT 5/2
I get 2, which doesn't surprise me, since dividing an int by an int produces an int. I know I can force it to a float by doing:
SELECT CAST(5 AS FLOAT)/CAST(2 AS FLOAT);
but that seems like overkill. I find that I can just as easily (and much more readably) get the same result by using
SELECT (0.0+5)/2;
I'm guessing this works through some sort of implicit type conversion? Is there any reason to prefer one method over the other?
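For what it's worth, when I try to check what each expression actually produces (assuming SQL_VARIANT_PROPERTY with 'BaseType' is a reasonable way to inspect the result type), the 0.0 trick seems to give me a DECIMAL/NUMERIC rather than a true FLOAT:
SELECT SQL_VARIANT_PROPERTY(5/2, 'BaseType');                               -- reports 'int'
SELECT SQL_VARIANT_PROPERTY((0.0+5)/2, 'BaseType');                         -- reports 'numeric', not 'float'
SELECT SQL_VARIANT_PROPERTY(CAST(5 AS FLOAT)/CAST(2 AS FLOAT), 'BaseType'); -- reports 'float'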