What is the actual data type, and how are you displaying the values? It might actually be the same value displayed in different ways.
If you mean the MSSQL data type `float`, it corresponds to the `double` data type in the .NET Framework, which has a precision of about 15 significant digits.
If you mean the MSSQL data type `real`, it corresponds to the `float` data type in the .NET Framework, which has a precision of about 7 significant digits.
As both data types have limited precision, you can't expect to get back the exact value that you assign. The value stored is the closest value that the data type can represent, so it is often rounded off where the precision ends. If you store the value 26.1295 in a single-precision field, it may actually end up as 26.129499435424805, as that is as close as you can get with the precision of the field.
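You can see this rounding directly. Java's `float` uses the same IEEE 754 single-precision format as the MSSQL `real` and .NET `float` types (an assumption this sketch relies on), and `BigDecimal` can print the exact value the stored bits represent:

```java
import java.math.BigDecimal;

public class StoredValueDemo {
    public static void main(String[] args) {
        // 26.1295 is rounded to the nearest single-precision value on assignment
        float single = 26.1295f;
        // The BigDecimal(double) constructor is exact, so this shows the
        // precise value the float actually holds
        System.out.println(new BigDecimal(single).toPlainString());
        // prints 26.1294994354248046875
    }
}
```

The stored value is not 26.1295 at all; it only looks that way when a formatter rounds it for display.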
How you see the value when you get it back depends on how you display it. The code that turns the value back into a text representation usually rounds it off before the precision ends, so you never see the difference.
If you store the value as a single-precision number in the database but convert it to a double-precision number when you read it, you may see the value as it is stored instead of rounded at the edge of the precision. The widening conversion increases the precision, but it cannot add any more information.
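A short sketch of that effect, again using Java's `float`/`double` as stand-ins for the .NET types (an assumption; the IEEE 754 formats are the same). Printing the single-precision value rounds it off within its own precision, while widening it to double precision first exposes the rounded stored value:

```java
public class WideningDemo {
    public static void main(String[] args) {
        float single = 26.1295f;
        // Printed as a float: the shortest string that round-trips,
        // so the rounding stays hidden
        System.out.println(single);          // 26.1295
        // Widened to double before printing: the extra precision
        // reveals the value that was actually stored
        System.out.println((double) single); // 26.129499435424805
    }
}
```

The widened value carries no new information; it is simply the single-precision value shown with more digits than the single-precision formatter would use.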