Using Cocoa or basic C, how can I convert a double to a string representation and then convert it back to the exact same double? Readability is not important, only accuracy.
For example, will this work:
double a,b;
a=some value;
b=[[[NSNumber numberWithDouble:a] stringValue] doubleValue];
A normal conversion to a string will almost inevitably lose accuracy. If it has to be a string, you'll probably want to put its address into a pointer to char, then write out the individual bytes in hex. Reading them back in is similar -- convert from hex to a byte, then put the bytes into memory in the correct order.
You cannot safely copy a double to a "string" because of embedded zeros. Suppose your double is held in memory as 0x43 0x63 0x00 0x00 0x00 0x00 0x00 0x01; read as a C string, that is "Cc", and the final 0x01 is "lost".
You can convert a double to an unsigned char array and back:
double val = 42.42;
unsigned char val_char[sizeof (double)];
/* ... */
/* copy double to unsigned char array */
memcpy(val_char, &val, sizeof val);
/* ... */
/* copy unsigned char array to double */
memcpy(&val, val_char, sizeof val);
To print the unsigned char array, for example:
size_t k;
printf("0x%02x", val_char[0]);
for (k = 1; k < sizeof (double); k++) {
    printf(" 0x%02x", val_char[k]);
}
puts("");
And to read it back (note that strtol needs the address of the end pointer, and strtol itself skips leading whitespace):
char buf[128];
char *p = buf;
size_t k;
fgets(buf, sizeof buf, stdin);
for (k = 0; k < sizeof (double); k++) {
    long tmp;
    char *buf_end;
    tmp = strtol(p, &buf_end, 0);
    p = buf_end;
    val_char[k] = (unsigned char)tmp;
}
[EDIT] Actually there is a way to do this in C99 -- use the %a
format specifier. This prints out a double in hexadecimal scientific notation, so there is no loss of precision. The result is exact. Example:
double d1 = 3.14159;
char str[64];
sprintf(str, "%a", d1);
// str is now something like "0x1.921f9f01b866ep+1"
...
double d2;
sscanf(str, "%la", &d2);
assert(d2 == d1);
Original answer below:
Most double-to-string conversions can't guarantee that. You can try printing out the double to a very high precision (a double can store about 16 decimal digits), but there's still a small chance of floating-point error:
// Not guaranteed to work:
double d = ...;
char str[64];
sprintf(str, "%.30g", d);
...
double d2;
sscanf(str, "%lg", &d2);  // note: %lg, since d2 is a double
This will work to a very high degree of precision, but it still may have an error of a few ULPs (units in the last place).
What are you doing that requires you to produce a 100% exactly reproducible value while using a string? I strongly suggest you reconsider what you're trying to do. If you really need an exact conversion, just store the double as 8 bytes in binary. If it needs to be stored in a printable format, convert the binary data into hex (16 characters) or base 64.
There's no way to do it portably.
The solutions above -- reinterpreting the double as raw bits and printing those -- are one decent way to do it; on a related note, if you need more readability you can split the mantissa from the exponent and print both in decimal. Even these are not guaranteed to print the double to perfect precision, however: for instance, on x86 the compiler will frequently keep intermediate results in the 80-bit x87 floating-point registers; the solutions above force the value into memory, where it is rounded to 64 bits and the extra precision is lost.
To create and use the “normal” (i.e., decimal) string representation in Cocoa, use NSString's stringWithFormat: method to produce a string from a double, and NSScanner's scanDouble: method to convert a string back to a double.
But, as Jerry Coffin said, this may lose precision. Writing out raw bytes will work on the same machine, but it's not an architecture-independent way to do it, so it'll create problems if the data ever goes from one machine to a different one.
A better way is to wrap the value in an NSNumber instance. That's a valid property list object. If you encode it as a binary plist using the NSPropertyListSerialization class, that should preserve all precision and be readable to other machines.
One way to do this is simply to generate the string representation to more significant digits than your floating-point type can handle; e.g. 17 decimal digits is sufficient to exactly represent an IEEE-754 double. This will guarantee that you get the same numeric value back (if your runtime library's conversion routines are correctly implemented) but has the downside of often giving you more digits than are necessary.
Python 3.1 recently added an algorithm that converts any floating-point value to the shortest string guaranteed to round back to the same value; you can check out the implementation if you're interested. It's pretty hairy, though.