views: 870
answers: 9

I want to test whether a double x is an integer power of 10. I could perhaps use cmath's log10 and then test whether the result is an integer, i.e. whether log10(x) == (int) log10(x)?

edit: Actually, my solution does not work, because doubles can be very big, much bigger than an int can hold, and also very small, like fractions.

+10  A: 

Your solution sounds good, but I would replace the exact comparison with one that uses a tolerance:

double exponent = log10(value);
double rounded = floor(exponent + 0.5);
if (fabs(exponent - rounded) < some_tolerance) {
    //Power of ten
}
Andreas Brinck
And maybe replace the cast to int with a call to `floor`; I find that expresses the intent better.
Space_C0wb0y
And before someone complains that this will technically include some numbers that are not powers of 10, I'll mention that IEEE 754 floating point notation doesn't even have exact representations for very large or very small powers of 10.
smehmood
I wanted a precise answer, not a rounded one. If a precise answer is not possible, it's best to warn the user.
Helltone
For most powers of ten, there won't be an exact representation, so most of the exact comparisons will fail.
Tobias Langner
@Helltone: if you want a precise answer, then some_tolerance must be close or equal to zero (the latter is not recommended, for the reasons already given). "Exact" answers for your scenario are only possible with integral data types.
Robert
@smehmood: IEEE 754 floating point notation doesn't have exact representations for any negative powers of 10. 0.1 is a repeating decimal in binary (if "decimal" is the right word to use).
David Thornley
A: 

Depending on the platform your code needs to run on, the log might be very expensive.

Since the number of values that are 10^n (where n is natural) is very small, it might be faster to just use a hardcoded lookup table.

(Ugly pseudo code follows:)

#include <cstdint>

bool isPowerOfTen( int16_t x )
{
  if( x == 1          // n=0
    || x == 10        // n=1
    || x == 100       // n=2
    || x == 1000      // n=3
    || x == 10000 )   // n=4
    return true;

  return false;
}

This covers the whole int16 range, and if that is all you need it might be a lot faster. (Depending on the platform.)

Andreas
The question specifically mentions *doubles*.
MadKeithV
Looks good for ints.
Narendra N
It didn't when I wrote the comment, but in that case you are certainly right. But even then, changing it to BCD and just checking for a trailing zero might be faster.
Andreas
Oh, and downvoting the obviously right solution (lookup table) just because of incomplete pseudocode is helping nobody ;)
Andreas
A: 
Neeraj
A creative solution, but he is using doubles. Doubles cannot represent every number with perfect accuracy; that's why you sometimes see calculations that look like this: (1 / 3) * 3 = 0.99999999... The look-up table already suggested is preferred, because it will store the "closest" representation of each power of 10. So if 0.999999887723x10^24 is the closest you get to 1.0000x10^24, you can store the "just off a bit" value and still get a positive hit.
Edwin Buck
A: 

If you don't need it to be fast, use recursion. Pseudocode:

bool checkifpoweroften(double Candidate)
     if Candidate >= 10
         return checkifpoweroften(Candidate / 10)
     elsif Candidate <= 0.1
         return checkifpoweroften(Candidate * 10)
     elsif Candidate == 1
         return true
     else
         return false

You still need to choose between false positives and false negatives and add tolerances accordingly, as other answers pointed out. The tolerances should apply to all comparisons; otherwise, for example, 9.99999999 would fail the >= 10 comparison.
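
As a concrete illustration, here is a C++ sketch of what a tolerance-aware version of the recursion above might look like; the function name, the tolerance value, and the guards for zero, negative, and non-finite input are my own choices, not part of the original pseudocode.

#include <cmath>

// Tolerance-aware variant of the recursion sketched above. The relative
// tolerance (1e-9) and the input guards are illustrative assumptions.
bool checkIfPowerOfTen(double candidate, double tolerance = 1e-9)
{
    if (!std::isfinite(candidate) || candidate <= 0.0)
        return false;                              // NaN, infinities, zero, negatives
    if (candidate >= 10.0 - tolerance)
        return checkIfPowerOfTen(candidate / 10.0, tolerance);
    if (candidate <= 0.1 + tolerance)
        return checkIfPowerOfTen(candidate * 10.0, tolerance);
    return std::fabs(candidate - 1.0) <= tolerance;
}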

Emilio M Bumachar
This will blow the stack when Candidate is very large or small.
Hans Passant
@nobugz: Unlikely. The recursion depth is limited by the number of bits in the exponent of `double`; typically this would recurse no more than 1023 times (but fail to produce meaningful results after 15 recursions)
MSalters
You're right, temporary blood supply restriction to the brain. Hope it's temporary...
Hans Passant
+20  A: 

A lookup table will be by far the fastest and most precise way to do this; only about 600 powers of 10 are representable as doubles. You can use a hash table, or if the table is ordered from smallest to largest, you can rapidly search it with binary chop.

This has the advantage that you will get a "hit" if and only if your number is exactly the closest possible IEEE double to some power of 10. If this isn't what you want, you need to be more precise about exactly how you would like your solution to handle the fact that many powers of 10 can't be exactly represented as doubles.

The best way to construct the table is probably to use string -> float conversion; that way hopefully your library authors will already have solved the problem of how to do the conversion in a way that gives the most precise answer possible.
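
As an illustration of that approach, here is a rough C++ sketch (mine, not the answerer's): it builds the table with std::stod over the exponent range -307..308 and searches it with std::binary_search; both the range and those particular library calls are assumptions made for the sake of the example.

#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

// Build the table once, using string -> double conversion so the library
// rounds each power of ten to its nearest representable double.
static std::vector<double> make_pow10_table()
{
    std::vector<double> table;
    for (int e = -307; e <= 308; ++e) {          // roughly the normal double range
        char buf[16];
        std::snprintf(buf, sizeof buf, "1e%d", e);
        table.push_back(std::stod(buf));
    }
    return table;                                // already sorted ascending
}

bool is_power_of_ten(double x)
{
    static const std::vector<double> table = make_pow10_table();
    // Binary chop: a hit if and only if x is bit-for-bit the nearest
    // representable double to some power of ten in the table's range.
    return std::binary_search(table.begin(), table.end(), x);
}

A hash table would work just as well; the important property, as the answer notes, is that each stored entry is the closest representable double to the corresponding power of ten.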

Paul Crowley
Since Helltone was looking for exact matches only, that would be 16 possible matches, not 600. All the more an argument for just checking against those constants.
Christopher Creutzig
@Christopher: Actually, there are *19* exactly representable powers of 10 in double precision.
Stephen Canon
@Stephen Canon: How do you get 19? I make it 23, assuming IEEE 754 binary64 doubles. (10^0 through 10^22 are all exactly representable, but 10^23 is not, falling exactly halfway between two representable doubles.)
Mark Dickinson
@Mark Dickinson: You're right of course. I didn't actually check, having done the computation for some other reason about a year ago. Clearly I misremembered the result. `10^22 = 0x1.0f0cf064dd592p73` is the largest exactly representable power of 10 in the binary64 format.
Stephen Canon
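
For anyone who wants to double-check that count, here is a small program (my own addition, assuming IEEE 754 binary64) based on the fact that 10^n = 5^n * 2^n, so 10^n is exactly representable precisely while 5^n still fits in the 53-bit significand.

#include <cstdio>

int main()
{
    const double two_to_53 = 9007199254740992.0;   // 2^53, the significand limit
    double five_to_n = 1.0;                        // 5^0
    int count = 0;
    while (five_to_n <= two_to_53) {               // 5^n still fits in 53 bits
        ++count;
        five_to_n *= 5.0;
    }
    std::printf("exactly representable powers of ten: 10^0 .. 10^%d (%d in total)\n",
                count - 1, count);                 // prints 10^0 .. 10^22 (23 in total)
}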
+2  A: 
bool power_of_ten(double x) {
   if(x < 1.0 || x > 10E15) {
      warning("IEEE754 doubles can only precisely represent powers "
              "of ten between 1 and 10E15, answer will be approximate.");
   }
   double exponent;
   // power of ten if log10 of absolute value has no fractional part
   return !modf(log10(fabs(x)), &exponent);
}
Helltone
Assuming, of course, that `log10` returns the exact value, and isn't off by a whole ULP.
David Thornley
+4  A: 

I am afraid you're in for a world of hurt. There is no way to cast down a very large or very small floating point number to a BigInt class, because precision was already lost when the value was stored as a floating point number.

For example, float only has 6 digits of precision. So if you represent 10^9 as a float, chances are it will be converted back as 1 000 000 145 or something like that: nothing guarantees what the last digits will be; they are beyond the precision.

You can of course use a much more precise representation, like double, which has 15 digits of precision. So normally you should be able to represent integers from 0 to 10^14 faithfully.

Finally, some platforms may have a long long type with even greater precision.

But anyway, as soon as your value exceeds the number of digits that can be converted back to an integer without loss... you can't test it for being a power of ten.

If you really need this precision, my suggestion is not to use a floating point number. There are mathematical libraries available with BigInt implementations or you can roll your own (though efficiency is difficult to achieve).
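
To make the precision limit concrete, here is a tiny illustration (my own, not part of the answer) of the point where a double stops distinguishing adjacent integers:

#include <cstdint>
#include <cstdio>

int main()
{
    // 2^53 + 1 is the first integer a binary64 double cannot represent;
    // converting it to double and back changes the value.
    const std::uint64_t n = (1ULL << 53) + 1;      // 9007199254740993
    const double d = static_cast<double>(n);
    std::printf("%llu -> %.0f\n",
                static_cast<unsigned long long>(n), d);
    // prints: 9007199254740993 -> 9007199254740992
}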

Matthieu M.
A: 

How about this:

bool isPow10(double number, double epsilon)
{
    if (number > 0)
    {
        for (int i=1; i <16; i++)
        {
            if ( (number >= (pow((double)10,i) - epsilon)) && 
                (number <= (pow((double)10,i) + epsilon)))
            { 
                return true;
            }
        }
    }
    return false;
}

I guess if performance is an issue, the few values could be precomputed, with or without the epsilon, according to the needs.

call me Steve
A: 

A variant of this one:

double log10_value= log10(value);
double integer_value;
double fractional_value= modf(log10_value, &integer_value);

return fractional_value==0.0;

Note that the comparison to 0.0 is exact rather than within a particular epsilon since you want to ensure that log10_value is an integer.

EDIT: Since this sparked a bit of controversy due to log10 possibly being imprecise and the general understanding that you shouldn't compare doubles without an epsilon, here's a more precise way of determining if a double is a power of 10, using only properties of powers of 10 and IEEE 754 doubles.

First, a clarification: a double can represent powers of ten exactly up to 1E22, as 1e22 has only 52 significant bits. Luckily, 5^22 also has only 52 significant bits, so we can determine if a double is (2*5)^n for n in [0, 22]:

bool is_pow10(double value)
{
    int exponent;
    double mantissa= frexp(value, &exponent);

    int exponent_adjustment= exponent/10;

    int possible_10_exponent= (exponent - exponent_adjustment)/3;

    if (possible_10_exponent>=0 && 
        possible_10_exponent<=22)
    {
        mantissa*= pow(2.0, exponent - possible_10_exponent);

        return mantissa==pow(5.0, possible_10_exponent);
    }
    else
    {
        return false;
    }
}

Since 2^10==1024, that adds an extra bit of significance that we have to remove from the possible power of 5.

MSN
Never compare a double with ==
Pavel Radzivilovsky
Better would be: double double_epsilon=1e-10; return fractional_value < double_epsilon;
Pavel Radzivilovsky
The answer is correct as given. If the fractional part has any value other than 0, the value is not a power of 10. (The fractional part of log10(10000000000.000099) is ~3.55e-15, for example.)
brone
@Pavel, did you read the last sentence in the post? Comparing against any epsilon rather than 0.0 will give an incorrect result. This is one of a few cases where you need to compare against a single value rather than a range of values.
MSN
Where is it written that log10 *must* return the exact mathematical result if that value is representable as a double? If you can show this, you win (at least my upvote). If not, then this is not guaranteed to work.
GregS
@GregS - Isn't this true for all the non-trivial functions? "Where is it written..." - I am not sure it IS written, but this is the underlying assumption when using a math lib, unless stated otherwise. Say log10 is not exact; then the result of the test will only be as accurate as the log10 implementation, which, with the same level of confidence, is the accuracy of all the computations in this program.
ysap
@ysap: Integers can be represented *exactly* by doubles up to a certain size. For those integers which are integral powers of 10, Paul Crowley gives an answer which is guaranteed to be correct. If this answer can also be so guaranteed then I will upvote it. Otherwise, it is an interesting idea but not right for this question.
GregS
Argh. This is why I get a downvote? Because log10 may not be completely accurate? FFS, then, just compute successively bigger powers of ten until you either hit the maximum power of ten representable by a double or reach a power >= value. If the power of ten is equal to value, success; otherwise, failure.
MSN
@MSN: then there's no way to get a correct result. Double == comparison is not supposed to work. Your function may say that 100 is not a power of 10.
Pavel Radzivilovsky
@MSN: it is also not true that this is a correct result. Floating point arithmetic is limited to a precision of N bits. You may want to sacrifice the last bit of precision (there's a way to sacrifice less than a bit, too) in order to be able to tell which numbers are exactly the same and which are not, which is what the story is all about. It was discussed by Donald Knuth once at length.
Pavel Radzivilovsky
Double == works fine. It compares the bits. Anyway on VC++ at least, this bit of code gives the correct results for all integer powers of 10 from 1e0 to 1e308, even though not all powers of 10 have an exact double representation. (I guess that log10 caters for this by considering any float whose corresponding range of reals includes a power of 10 is an honorary power of 10.)
brone
@Pavel, you are correct in general but incorrect in this specific instance. I know IEEE 754 math pretty well. At least enough to reconstruct a very fast float->int converter abusing type aliasing. As brone says, comparing against 0.0 is a BITWISE comparison, i.e., I am checking to ensure that the fractional part of log10_value is EXACTLY 0. 0.0 is exactly representable as a double with all bits set to zero. You could argue that I should also check against -0.0, but that would be incorrect as no negative number can be a power of 10.
MSN
@brone: yes, it compares bits fine. It might be (and I hope sir MSN agrees) _totally meaningless_ from an arithmetical point of view. For instance, IEEE 754 addition is not associative (in the group theory sense). Or, log10(10^n) != n is, IMO, very plausible as well; they may differ in the last bit, because so is life, and it is totally implementation dependent, as implementations are only guaranteed to the precision of the underlying type. To give any real-world meaning to a comparison, one must understand the precisions involved. Even if you can prove that in some cases 1-(1-a) == a, it is also bad style, raising suspicion.
Pavel Radzivilovsky
My instinct would be that log10(10^n)==n always holds, but that log10(10^n+X)!=n (where X!=0) might not. My thinking is that a floating point value, F, represents a range of real values, and if 10^n is within that range, then it will be arranged that log10(F)==n -- what else makes sense? I have to admit that averaged out over all possible implementations of log10 this may not hold -- even if these other implementations are a bit useless :) -- so I will concede the point.
brone
@brone, @Pavel, to be fair, I didn't answer the question of whether this is actually a good idea or not. It probably isn't, given that the problem itself isn't that well-formed: either you shouldn't use doubles for precise log10 calculations or you don't need that much precision to determine if a value is a power of 10. Obviously for sufficiently high values `log10(value)`==`log10(value + arbitrary_precision)`.
MSN