tags:
views: 78
answers: 3
What's the difference between the SQL Server data types float and numeric?

+1  A: 

numeric is a decimal (base-10) fixed-point datatype; float is a binary (base-2) floating-point datatype.

A numeric(18,10) defines a decimal with precision (maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point) 18 and scale (maximum number of decimal digits that can be stored to the right of the decimal point) 10. It consumes 9 bytes of storage, versus a float's default 8 bytes.
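As an illustrative sketch (not from the original answer): with precision 18 and scale 10, at most 8 digits remain for the integer part.

```sql
-- numeric(18,10): 18 total digits, 10 of them after the decimal point,
-- leaving at most 8 digits before it
DECLARE @n NUMERIC(18,10) = 12345678.0123456789;  -- fits: 8 + 10 digits
SELECT @n;

-- DECLARE @bad NUMERIC(18,10) = 123456789.0;
-- would fail with an arithmetic overflow error: 9 integer digits
-- exceed the 8 allowed by precision 18 minus scale 10
```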

Here's a starting point for more reading.

Michael Petrotta
Numeric is a fixed-point type, not a floating-point type.
Thom Smith
@Thom - right you are. Thanks.
Michael Petrotta
+3  A: 

FLOAT conforms to IEEE 754 and approximates decimal representation.

NUMERIC is exact in decimal representation (up to the declared precision).

SELECT  CAST(PI() AS FLOAT),
        CAST(PI() AS NUMERIC(20, 18)),
        CAST(PI() AS NUMERIC(5, 3))


---------------------- --------------------------------------- ---------------------------------------
3.14159265358979       3.141592653589793100                    3.142
Quassnoi
The precision is only approximate when measured in decimal places. It is exact in bits.
Daniel Pryden
so for $, float is fine right?
mrblah
@mrblah if your floats are small.
Stefan Mai
@mrblah: float is completely inappropriate for monetary data. There's a `money` datatype much better suited for this, which avoids the rounding errors associated with float.
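To illustrate the comment above, a quick sketch (illustrative, not from the thread) of the classic binary floating-point pitfall that `money`, being fixed-point, avoids:

```sql
-- 0.1 + 0.2 cannot be represented exactly in binary float,
-- so the comparison with 0.3 fails; money is exact to 4 decimal places
SELECT CASE WHEN CAST(0.1 AS FLOAT) + CAST(0.2 AS FLOAT) = CAST(0.3 AS FLOAT)
            THEN 'equal' ELSE 'not equal' END AS float_compare,   -- 'not equal'
       CASE WHEN CAST(0.1 AS MONEY) + CAST(0.2 AS MONEY) = CAST(0.3 AS MONEY)
            THEN 'equal' ELSE 'not equal' END AS money_compare;   -- 'equal'
```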
Michael Petrotta
A: 

float is defined as a binary floating-point number.

These are much more efficient to work with in binary computers than decimal floating-point numbers (in fact, most math operations on floats are implemented in hardware), and can be highly precise. However, since the precision is measured in bits, not decimal places, floats are not ideal for use with algorithms that depend on the decimal representation of a number (e.g. financial applications).
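A small T-SQL sketch (illustrative only) of the kind of decimal-dependent behavior described above:

```sql
-- accumulating 0.1 ten times: exact in decimal, approximate in binary float
DECLARE @f FLOAT = 0, @d DECIMAL(10, 2) = 0, @i INT = 0;
WHILE @i < 10
BEGIN
    SET @f = @f + 0.1;   -- 0.1 has no exact binary representation
    SET @d = @d + 0.1;   -- 0.1 is stored exactly as a decimal
    SET @i = @i + 1;
END;
SELECT @f AS float_sum,    -- slightly less than 1
       @d AS decimal_sum;  -- exactly 1.00
```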

A couple of good references are Wikipedia's page on IEEE 754 (the floating-point standard), and David Goldberg's ACM article What Every Computer Scientist Should Know About Floating-Point Arithmetic.

Daniel Pryden