I'm wondering what sort of algorithm could be used to convert something like "4.72" into a float data type, equal to
float x = 4.72;
The atof() function can be helpful. http://www.cplusplus.com/reference/clibrary/cstdlib/atof/
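For example, a minimal use of atof() (keep in mind it returns a double and has no way to report a parse error):

#include <stdlib.h>
#include <stdio.h>

int main(void) {
    const char *s = "4.72";
    float x = (float)atof(s);   /* atof() returns double; narrowed to float here */
    printf("%f\n", x);
    return 0;
}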
For C, strtod() and its C99 friends strtof() and strtold() (described at the same link) already have that algorithm implemented.
If you are having problems writing your own, post your code and specific questions about it.
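A minimal sketch of the end-pointer check these functions provide, here using strtod() (strtof() and strtold() take the same arguments):

#include <stdlib.h>
#include <stdio.h>

int main(void) {
    const char *s = "4.72";
    char *end;
    double d = strtod(s, &end);   /* strtof()/strtold() work the same way */
    if (end == s) {
        printf("no conversion could be performed\n");
    } else {
        printf("parsed %f, unparsed tail: \"%s\"\n", d, end);
    }
    return 0;
}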
scanf, operator>> for istreams, and strtof would be the obvious choices. There is also atof, but, like atoi, it lacks a way to tell you there was an error in the input, so it's generally best to avoid both.
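To make that difference concrete, a small sketch contrasting atof() with strtof() and its end pointer (strtof() is C99):

#include <stdlib.h>
#include <stdio.h>

int main(void) {
    /* atof() returns 0.0 in both cases, so failure looks like a legitimate zero: */
    printf("%f %f\n", atof("0"), atof("not a number"));

    /* strtof() reports how far it got via its end pointer: */
    const char *s = "not a number";
    char *end;
    float x = strtof(s, &end);
    if (end == s)
        printf("strtof: no conversion performed\n");
    else
        printf("strtof: %f\n", x);
    return 0;
}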
You can use boost::lexical_cast:
http://www.boost.org/doc/libs/1_44_0/libs/conversion/lexical_cast.htm
For C++ you can use boost::lexical_cast:
std::string str( "4.72" );
float x = boost::lexical_cast< float >( str );
For C you can use sscanf:
char str[]= "4.72";
float x;
sscanf( str, "%f", &x );
For C++, this is the algorithm I use:

#include <sstream>
#include <string>

bool FromString(const std::string& str, double& number) {
    std::istringstream i(str);
    if (!(i >> number)) {
        // Number conversion failed
        return false;
    }
    return true;
}
I used atof() for the conversion in the past, but I found it problematic: if no valid conversion can be made, it returns 0.0, so you cannot tell whether the conversion failed or the string actually contained "0".
From cplusplus.com: "stringstream provides an interface to manipulate strings as if they were input/output streams." You can initialize a stringstream with your string, then read a float from the stringstream using operator>> just like you would with cin.
Here is an example:
#include <iostream>
#include <string>
#include <sstream>
using namespace std;

int main() {
    string s = "4.72";
    stringstream sstrm(s);
    float x;
    sstrm >> x;
    cout << x << endl;
}
I assume you want an actual algorithm, not a library function that already does it. I don't have time to write and test actual code, but here is what I would do:
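As a rough sketch only (the function name parse_float and the details are illustrative, and there is no input validation): read an optional sign, accumulate the digits before the decimal point, then add each digit after the point scaled by successive powers of ten.

float parse_float(const char *s)   /* illustrative name */
{
    float result = 0.0f;
    float sign = 1.0f;

    /* Optional leading sign. */
    if (*s == '-') { sign = -1.0f; s++; }
    else if (*s == '+') { s++; }

    /* Digits before the decimal point. */
    while (*s >= '0' && *s <= '9') {
        result = result * 10.0f + (float)(*s - '0');
        s++;
    }

    /* Digits after the decimal point: each is worth a tenth of the previous one. */
    if (*s == '.') {
        float scale = 0.1f;
        for (s++; *s >= '0' && *s <= '9'; s++) {
            result += (float)(*s - '0') * scale;
            scale *= 0.1f;   /* 0.1 is not exact in binary, so error accumulates here */
        }
    }

    return sign * result;
}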
Due to the roundoff of converting between base 10 and base 2 in every iteration of this loop, the result you get from this algorithm may not be the closest possible binary representation to the original value. I don't really know of a good way to improve it though... perhaps someone else can chime in with that.
As you've asked for an algorithm, not a method, here is my explanation of a simple algorithm (and an implementation in C). Reading the code is probably easier, so here it is:
float atof(const char *s)
{
    int f, m, sign, d = 1;
    f = m = 0;

    /* Handle an optional leading sign. */
    sign = (s[0] == '-') ? -1 : 1;
    if (s[0] == '-' || s[0] == '+') s++;

    /* Accumulate the integer part in f. */
    for (; *s != '.' && *s; s++) {
        f = (*s - '0') + f * 10;
    }

    /* Accumulate the fractional digits in m and the matching divisor in d. */
    if (*s == '.')
        for (++s; *s; s++) {
            m = (*s - '0') + m * 10;
            d *= 10;
        }

    return sign * (f + (float)m / d);
}
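A quick way to exercise it (assuming the function above is in the same file; note it reuses the name atof, so don't also include <stdlib.h> here):

#include <stdio.h>

float atof(const char *s);   /* the hand-rolled version above */

int main(void) {
    printf("%f\n", atof("4.72"));   /* expect roughly 4.72 */
    printf("%f\n", atof("-0.5"));   /* expect -0.5 */
    return 0;
}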