#include <stdio.h>   /* getchar */
#include <ctype.h>   /* isspace, isdigit */

#define MAXBUF 1000
int buf[MAXBUF];     /* pushback buffer for ungetch */
int buffered = 0;
int bufp = 0;        /* next free position in buf */

int getch()
{
    if(bufp > 0) {
       if(!--bufp)
          buffered = 0;

       return buf[bufp];
    }
    else {
       buffered = 0;
       return getchar();
    }

}

void ungetch(int c)
{
     if(bufp >= MAXBUF)
          printf("ungetch: too many characters\n");
     else {
          buf[bufp++] = c;
          buffered = 1;
     }
}

int getfloat(float *pn)
{
    int c, sign, sawsign;
    float power = 1.0;

    while(isspace(c=getch()))
         ;

    if(!isdigit(c) && c!= '+' && c!= '-' && c != '.') {
          ungetch(c);
          return 0;
    }

    sign = (c == '-') ? -1 : 1;

    if((sawsign = (c == '-' || c == '+')))
       c = getch();

    if(c != '.' && !isdigit(c)) {
         ungetch(c);

         if(sawsign)
            ungetch((sign == -1) ? '-' : '+');

         return 0;
    }

    for(*pn = 0.0; isdigit(c); c = getch())
        *pn = 10.0 * *pn + (float)(c - '0');

    if(c == '.')
       while(isdigit(c = getch())) {
         *pn = 10.0 * *pn + (float)(c - '0');
          power *= 10.0;
       }

    *pn *= sign;
    *pn /= power;

    ungetch(c);
    return c;
}

It always returns 23.7999 when I enter 23.8, and I have no idea why. Can anybody tell me why?

+2  A: 

23.8 can't be represented exactly given the limited accuracy of IEEE 754 floats.

Ignacio Vazquez-Abrams
It cannot be represented exactly in any binary floating-point representation, even with infinite precision. The fractional part of 23.8 in binary is 0.1100 repeating.
Merlyn Morgan-Graham
+3  A: 

Because many decimal fractions are inherently inexact in binary floating point.

Skilldrick
If you want exactly 23.8, use a different data type: integers representing tenths, for example, or the Decimal type that many languages provide. I think most banks use integers representing thousandths of a dollar.
Karl
I tried using double as the argument type and it works fine now. Why does it work with double, then?
Tool
Rounding/conversion oddities. Converting decimal to binary (parsing into a float/double) and then back to decimal (printing the value out) gives two points of rounding. Using a float vs. a double changes the induced error, just as rounding decimal values to 10 places vs. 20 places would.
Merlyn Morgan-Graham
+5  A: 

Numbers are represented in base 2, and base-2 floating-point values cannot represent every base-10 decimal value exactly. What you enter as 23.8 gets converted into its closest equivalent base-2 value, which is not exactly 23.8. When you print this approximate value out, it gets printed as 23.7999.

You are also using float, which is the smallest floating-point type and has only 24 bits of precision (roughly 7 decimal digits). If you switch to double, the number of bits of precision more than doubles (53 bits, roughly 15-16 decimal digits), so the difference between a decimal value such as 23.8 and its double representation is much smaller. This may allow a printing routine to round the output so that you see 23.8 with double. However, the actual value in the variable is still not exactly 23.8.

As general advice, unless you have a huge number of floating-point values (making memory usage your primary concern), it is best to use double whenever you need a floating-point type. You don't get rid of all odd behavior but you're going to see less of it than with float.

jk
A: 

Because you didn't use sprintf.

ctd