views: 1085
answers: 4

I am having a weird issue here. I have a database table with a very large value stored in one of its columns. My application (C#) reads this value and keeps it in a double. The application then inserts the same value into another table. Note: I am not doing any calculations or processing on the value read from the first table; it is kept only for updating the second table.

The issue is that the second table ends up with a slightly different value than the first table. It looks like the number gets rounded off when I keep it in the double.

Here is an example of values.

Original value: 18014398509481984

Value copied to new table: 18014398509482000

The values look different, but they are actually the same. I did a Google search with 18014398509481984 - 18014398509482000 as the search term and it returned 0, which suggests both are the same.

Questions:

1 - If both are the same, why does the second value look different? I can see 1984 turned into 2000.

2 - Why does the conversion happen?

3 - How can I avoid this type of conversion?

Any help would be great!

+3  A: 

A double-precision value is accurate to only 15 or 16 significant decimal digits (see here for an explanation). If you need to store more digits than that, you will have to use a different number format. If you want to work with very big integers without losing accuracy, there are various classes out there to help you, like this one.
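For example, here is a minimal C# sketch (using the value from the question, which is exactly 2^54) showing how neighbouring integers in this range collapse to the same double, while long and decimal keep every digit:

```csharp
using System;

class DoublePrecisionDemo
{
    static void Main()
    {
        // The value from the question, exactly 2^54.
        long original = 18014398509481984;

        // A double has a 52-bit mantissa, so above 2^53 not every integer
        // is representable; neighbouring integers round to the same double.
        double asDouble = original;
        double nextInteger = original + 1;          // 18014398509481985
        Console.WriteLine(asDouble == nextInteger); // True

        // long and decimal keep every digit of a value in this range.
        long asLong = original;
        decimal asDecimal = original;
        Console.WriteLine(asLong);    // 18014398509481984
        Console.WriteLine(asDecimal); // 18014398509481984
    }
}
```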

If you're getting a value out of SQL, make sure that your target data type in .NET matches the column's type - SQL bigint mapped to C# long, for example - to avoid rounding issues like this.
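As a hedged sketch of that idea with plain ADO.NET (the helper names and the ordinal parameter are just illustrative), read the column with the accessor that matches its declared type instead of funnelling everything through GetDouble:

```csharp
using System.Data;

static class ReaderHelpers
{
    // SQL bigint -> C# long: exact over the whole 64-bit integer range.
    public static long ReadBigInt(IDataReader reader, int ordinal)
    {
        return reader.GetInt64(ordinal);
    }

    // SQL decimal/numeric (or Oracle NUMBER) -> C# decimal: keeps up to
    // 28-29 significant digits, far more than double's 15-16.
    public static decimal ReadExactNumeric(IDataReader reader, int ordinal)
    {
        return reader.GetDecimal(ordinal);
    }
}
```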

David M
My table's column type is NUMBER and I am using an Oracle database.
Appu
+2  A: 

I believe this is due to floating-point precision (the large number is stored as a mantissa and an exponent), which means it is essentially represented as a fractional number scaled by a power of two. Fractional numbers, however, are subject to rounding errors in floating-point arithmetic.
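To make the mantissa/exponent point concrete, here is a small sketch that pulls the IEEE 754 fields out of the question's value with BitConverter (standard double layout: 1 sign bit, 11 exponent bits, 52 mantissa bits):

```csharp
using System;

class MantissaExponentDemo
{
    static void Main()
    {
        double value = 18014398509481984d;             // the value from the question
        long bits = BitConverter.DoubleToInt64Bits(value);

        long sign     = (bits >> 63) & 0x1;            // 1 bit
        long exponent = ((bits >> 52) & 0x7FF) - 1023; // 11 bits, biased by 1023
        long mantissa = bits & 0xFFFFFFFFFFFFFL;       // 52 bits

        // Prints: sign=0, exponent=54, mantissa=0, i.e. value = 1.0 * 2^54.
        // With only 52 mantissa bits, integers above 2^53 cannot all be
        // represented exactly, which is where the rounding comes from.
        Console.WriteLine($"sign={sign}, exponent={exponent}, mantissa={mantissa}");
    }
}
```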

Normally the way around this is to avoid floating-point values (try Int64), use a more precise type (Decimal), or account for the error and do an 'approximately equal to' comparison.

Ian
Thanks. What do you mean by 'approx equal to'?
Appu
You could add an extension method to Double, or a static one, that determines whether the values are within, say, 0.001% of each other, and if so treats them as equal. Obviously you set the threshold at a point you are happy with.
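A minimal sketch of that idea, assuming a 0.001% relative tolerance (the method name and threshold are just examples):

```csharp
using System;

static class DoubleExtensions
{
    // Treat two doubles as equal when they differ by no more than a
    // relative tolerance (0.001% = 1e-5 here).
    public static bool ApproximatelyEquals(this double value, double other,
                                           double relativeTolerance = 1e-5)
    {
        if (value == other)
        {
            return true; // handles exact matches, including 0 == 0
        }

        double scale = Math.Max(Math.Abs(value), Math.Abs(other));
        return Math.Abs(value - other) <= scale * relativeTolerance;
    }
}

// Usage: 18014398509481984d.ApproximatelyEquals(18014398509482000d) returns true.
```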
Ian
+3  A: 

Try using a System.Decimal to store the value from the first table instead of a System.Double. System.Double doesn't hold enough significant digits to store a value that large accurately.
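For example, here is a hedged sketch of the insert side, keeping the value as a decimal parameter all the way into the second table (the table, column, and parameter names are invented for illustration):

```csharp
using System.Data;

static class CopyValueSketch
{
    public static void CopyValue(IDbConnection connection, decimal value)
    {
        using (IDbCommand command = connection.CreateCommand())
        {
            // Oracle-style named placeholder; adjust to your provider's syntax.
            command.CommandText =
                "INSERT INTO SecondTable (BigValue) VALUES (:bigValue)";

            IDbDataParameter parameter = command.CreateParameter();
            parameter.ParameterName = "bigValue";
            parameter.DbType = DbType.Decimal; // not DbType.Double
            parameter.Value = value;
            command.Parameters.Add(parameter);

            command.ExecuteNonQuery();
        }
    }
}
```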

Tim S. Van Haren
Thanks. I will try with the decimal type.
Appu
+2  A: 

Do you need to store these as floating point numbers?

If not then you could use 64-bit integers instead: BIGINT in the database, and long/Int64 in your app.

These have a range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 and have no precision/accuracy issues.
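Those bounds are exactly long.MinValue and long.MaxValue in C#:

```csharp
using System;

class Int64RangeDemo
{
    static void Main()
    {
        Console.WriteLine(long.MinValue); // -9223372036854775808
        Console.WriteLine(long.MaxValue); //  9223372036854775807
    }
}
```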

LukeH
Thanks. But my value will be a floating-point number, so I guess Int64 or long will not be appropriate.
Appu