I'm trying to determine the asymptotic run-time of one of my algorithms, which uses exponents, but I'm not sure how exponents are calculated programmatically.

I'm specifically looking for the pow() algorithm used for double-precision floating-point numbers.

+1  A: 

The usual approach for raising a to the power b, when the exponent is an integer, goes something like this:

result = 1
while b > 0
  if b is odd
    result *= a
    b -= 1
  b /= 2
  a = a * a

It performs O(log b) multiplications, i.e., it is logarithmic in the size of the exponent. The algorithm is based on the invariant a^b * result = a0^b0, where a0 and b0 are the initial values of a and b.
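
As a minimal C sketch (assuming a non-negative integer exponent; the name ipow is just for illustration), the same loop looks like this:

double ipow(double a, unsigned b) {
    double result = 1.0;
    while (b > 0) {
        if (b % 2 == 1) {   /* odd exponent: peel off one factor */
            result *= a;
            b -= 1;
        }
        b /= 2;
        a *= a;             /* square the base each round */
    }
    return result;
}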

For negative or non-integer exponents, logarithms, approximations, and numerical analysis are needed. The running time will depend on the algorithm used and on the precision the library is tuned for.

Edit: Since there seems to be some interest, here's a version without the extra multiplication.

result = 1
while b > 0
  while b is even
    a = a * a
    b = b / 2
  result = result * a
  b = b - 1
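
The same variant as a C sketch, under the same assumptions as above:

double ipow(double a, unsigned b) {
    double result = 1.0;
    while (b > 0) {
        while (b % 2 == 0) {  /* square away even factors first */
            a *= a;
            b /= 2;
        }
        result *= a;          /* b is now odd */
        b -= 1;
    }
    return result;
}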
Glomek
There seem to be a couple of bugs in your pseudocode: result is updated only when b is odd, and when b == 1 at the top of the while loop there will be an extra multiplication.
Michael Burr
I think your approach would be good for bignum exponentiation, but it isn't quite as applicable to floating-point exponentiation, for which there is a constant-time way to calculate the result.
Chris Jester-Young
+7  A: 

I've had a chance to look at fdlibm's implementation. The comments describe the algorithm used:

 *                    n
 * Method:  Let x =  2   * (1+f)
 *      1. Compute and return log2(x) in two pieces:
 *              log2(x) = w1 + w2,
 *         where w1 has 53-24 = 29 bit trailing zeros.
 *      2. Perform y*log2(x) = n+y' by simulating multi-precision
 *         arithmetic, where |y'|<=0.5.
 *      3. Return x**y = 2**n*exp(y'*log2)

followed by a listing of all the special cases handled (0, 1, inf, nan).
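
The method builds on the identity x**y = 2**(y*log2(x)). Here is a naive one-line C sketch of that identity (this is not the fdlibm code, which computes the log2 and 2** pieces inline with extra precision; the name pow_sketch is just for illustration):

#include <math.h>

/* Naive sketch of the identity behind the method above; assumes x > 0
 * and none of the special cases. fdlibm splits log2(x) into two doubles
 * (w1 + w2) so that y * log2(x) keeps enough bits; this one-liner does
 * not, and loses accuracy. */
double pow_sketch(double x, double y) {
    return exp2(y * log2(x));   /* a fixed number of operations, no loops */
}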

The most computation-intensive sections of the code, after all the special-case handling, are the log2 and 2** calculations, and there are no loops in either of them. So, the complexity of floating-point primitives notwithstanding, it looks like an asymptotically constant-time algorithm.

Floating-point experts (of which I'm not one) are welcome to comment. :-)

Chris Jester-Young
How do you know that step 1 takes constant time? Did you read the code for log2? And what about step 3: is that really the e-based exp, or is it exp2, and did you read the code for it?
Windows programmer
The code doesn't call log2 or exp or exp2. It implements all those things inline.
Chris Jester-Young
A: 

If I were writing a pow function targeting Intel, I would return exp2(log2(x) * y). Intel's microcode for log2 is surely faster than anything I'd be able to code, even if I could remember my first year calculus and grad school numerical analysis.

Windows programmer
+1  A: 

Unless they've discovered a better way to do it, I believe that approximate values for trig, logarithmic, and exponential functions (for exponential growth and decay, for example) are generally calculated using arithmetic rules and Taylor series expansions to produce a result accurate to within the requested precision. (See any calculus book for details on power series, Taylor series, and Maclaurin series expansions of functions.) Please note that it's been a while since I did any of this, so I couldn't tell you, for example, exactly how to calculate the number of terms you need to include to guarantee an error small enough to be negligible in a double-precision calculation.

For example, the Taylor/Maclaurin series expansion for e^x is this:

      +inf [ x^k ]           x^2    x^3      x^4        x^5
e^x = SUM  [ --- ] = 1 + x + --- + ----- + ------- + --------- + ....
      k=0  [  k! ]           2*1   3*2*1   4*3*2*1   5*4*3*2*1

If you take all of the terms (k from 0 to infinity), this expansion is exact and complete (no error).

However, if you don't take all the terms going to infinity, but stop after, say, 5 terms or 50 terms, you produce an approximate result that differs from the actual e^x value by a remainder that is fairly easy to bound.

The good news for exponentials is that the series converges nicely and its terms are fairly easy to compute iteratively, so you might (repeat, MIGHT - remember, it's been a while) not even need to pre-calculate how many terms you need to keep the error below your precision: you can test the size of each term's contribution and stop when it becomes close enough to zero. In practice, I don't know whether this strategy is viable - I'd have to try it. There are important details I have long since forgotten about: machine precision, machine error, rounding error, and so on.
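
Here's a minimal C sketch of that stop-when-the-term-is-negligible idea (illustrative only; the name exp_taylor is mine, and real library implementations reduce the argument and use tuned polynomials instead):

/* Sum the Maclaurin series for e^x, stopping once adding the next
 * term no longer changes the double-precision sum. Fine for small
 * |x|; large or very negative x would need argument reduction first
 * to avoid slow convergence and cancellation. */
double exp_taylor(double x) {
    double sum = 1.0, term = 1.0;    /* term holds x^k / k! */
    for (int k = 1; k < 200; k++) {
        term *= x / k;               /* next term from the previous one */
        if (sum + term == sum)       /* contribution below precision */
            break;
        sum += term;
    }
    return sum;
}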

Also, please note that if you are not using e^x but doing growth/decay with another base like 2^x or 10^x, the approximating polynomial changes (though you can always reduce to the e^x series via b^x = e^(x*ln b)).

gsarnold
Yes, they've discovered better ways to do it, for suitable definitions of "they".
Windows programmer
A: 

You can use exp(n*ln(x)) to calculate x^n. Both x and n can be double-precision floating-point numbers. The natural logarithm and the exponential function can be calculated using Taylor series. You can find the formulas here: http://en.wikipedia.org/wiki/Taylor_series
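
A one-line C sketch of that identity (valid for x > 0 only; x <= 0 needs the special-case handling that real pow() implementations include; the name pow_via_log is just for illustration):

#include <math.h>

/* x^n = exp(n * ln(x)) for x > 0; a sketch, not a production pow() */
double pow_via_log(double x, double n) {
    return exp(n * log(x));
}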

akalenuk