I'm trying to write a function in the D programming language to replace the calls to C's strtold. (Rationale: To use strtold from D, you have to convert D strings to C strings, which is inefficient. Also, strtold can't be executed at compile time.) I've come up with an implementation that mostly works, but I seem to lose some precision in the least significant bits.
The code for the interesting part of the algorithm is below, and I can see where the precision loss comes from, but I don't know how to get rid of it. (I've left out the parts of the code that aren't relevant to the core algorithm, to save people reading.) What string-to-float algorithm will guarantee that the result is as close as possible on the IEEE number line to the value represented by the string?
real currentPlace = 10.0L ^^ (pointPos - ePos + 1 + expon);  // place value of the last digit
real ans = 0;
// Walk the digits from least significant to most significant.
for(int index = ePos - 1; index > -1; index--) {
    if(str[index] == '.') {
        continue;  // skip the decimal point
    }
    if(str[index] < '0' || str[index] > '9') {
        err();
    }
    auto digit = cast(int) str[index] - cast(int) '0';
    // Both the += and the *= below round to real precision on every
    // iteration, which is where the low-bit error creeps in.
    ans += digit * currentPlace;
    currentPlace *= 10;
}
return ans * sign;
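One idea I've considered is accumulating the digits into an integer mantissa and applying the power of ten only once at the end, so there's a single rounding step instead of one per digit. Below is a minimal self-contained sketch of that idea (parseSimple is a made-up name; it ignores signs and 'e' exponents, and I don't believe it guarantees correct rounding in every case either):

import std.math;  // the floating-point ^^ operator lowers to std.math.pow

real parseSimple(string str) {
    ulong mantissa = 0;   // up to 19 decimal digits fit exactly in a ulong
    int fracDigits = 0;   // count of digits after the decimal point
    bool seenPoint = false;
    foreach(c; str) {
        if(c == '.') {
            seenPoint = true;
            continue;
        }
        assert(c >= '0' && c <= '9');
        mantissa = mantissa * 10 + (c - '0');  // exact integer arithmetic
        if(seenPoint) {
            fracDigits++;
        }
    }
    // Small powers of ten are exactly representable in 80-bit real,
    // so the division below is the only operation that rounds.
    return cast(real) mantissa / (10.0L ^^ fracDigits);
}

unittest {
    assert(parseSimple("0.456") == 456.0L / 1000.0L);
}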
Also, I'm reusing the unit tests from the old version, which did things like:
assert(to!(real)("0.456") == 0.456L);
Is it possible that the answers my function produces are actually closer to the true values than the representations the compiler produces when parsing floating-point literals, but the compiler (which is written in C++) always agrees exactly with strtold because it uses strtold internally to parse floating-point literals?
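One way I've been inspecting this is printing candidate values with the %a format specifier, which shows the exact hex mantissa, so a one-ULP disagreement shows up in the last hex digit. For example (std.math.nextUp is just used here to manufacture a value one ULP above the literal):

import std.math : nextUp;
import std.stdio : writefln;

void main() {
    real fromLiteral = 0.456L;
    writefln("literal:    %a", fromLiteral);          // exact bits of the literal
    writefln("one ULP up: %a", nextUp(fromLiteral));  // differs in the last hex digit
}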