tags:

views: 93

answers: 2

I am trying to convert a char array to a double, but I get some rounding error in the last decimal place. Please see the code below.

#include <stdio.h>

#define INT32MAX 017777777777 
#define LIND 0
#define MIND 1

typedef int Int32;  
typedef unsigned int UInt32;  
typedef short int Int16;  

union _talign  
{  
    double        _doubles[2];  
    double        _double;  
    Int32         _long[4];  
    UInt32        _unsign[4];  
    Int16         _short[8];  
    float         _float[4];  
    char          _char[16];  
    long long     _BIGINT;  
};  

int main()  
{  
    union  _talign value;  
    double dval = 0.0;  
    double dmag = 1000000.0000000000;  
    int i=0;  
    char cp[8] = { 135,55,83,03,178,67,55,0};  

    for(i=0;i<8;i++)  
        value._char[i] = cp[i];  

    dval = (double)value._long[MIND];  
    dval = ((double)INT32MAX + 1.0) * dval * 2.0;  
    dval = (dval +  value._long[LIND]) / dmag ;  

    printf("Expecting dval = 15555555558.111111\n");  
    printf("Got dval = %lf\n",dval);  

    return 0;  
}

I am expecting dval to be 15555555558.111111, but the program gives me 15555555558.111113. Does anybody have an idea how to get the last digit correct, or know another way to do this sort of conversion? Your suggestions will be appreciated. Thanks.

+5  A: 

The problem is that not all decimal numbers can be represented exactly in a finite number of binary digits. Floating-point types do the best they can, but in this case your particular number cannot be represented exactly as a double. 15555555558.111113 is the closest you can get, unless you resort to a library for arbitrary-precision decimals.

Chris Lutz
+2  A: 

From the Wikipedia article on floating point, under Accuracy problems:

The fact that floating-point numbers cannot faithfully mimic the real numbers, and that floating-point operations cannot faithfully mimic true arithmetic operations, leads to many surprising situations. This is related to the finite precision with which computers generally represent numbers.

A simple example,

   double dval = 0.1;  
   printf("dval = %f\n", dval); // 0.1

We see 0.1 on the screen. It seems OK, right?

Let's try to display more digits:

   printf("dval = %.18f\n", dval); // ?

The problem is that the number we see is not always the number we actually have! For example, 0.1 is internally stored as the approximation 0.1000000000000000055511512...

Nick D