What is the most efficient way to raise an integer to the power of another integer in C?
// 2^3
pow(2,3) == 8
// 5^5
pow(5,5) == 3125
Ignoring the special case of 2 raised to a power, the most efficient way is going to be simple iteration.
int pow(int base, int exp) {
    int res = 1;
    for (int i = 0; i < exp; i++)
        res *= base;
    return res;
}
EDIT: As has been pointed out, this is not the most efficient way, so long as you define efficiency as CPU cycles, which I guess is fair enough.
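For the base-2 special case set aside above, a left shift is all you need. A minimal sketch (the function name and the 64-bit range assumption are mine):

#include <stdint.h>

/* Hypothetical helper: 2^exp via a left shift.
   Assumes 0 <= exp < 64 so the result fits in a 64-bit unsigned integer. */
uint64_t pow2(unsigned exp)
{
    return (uint64_t)1 << exp;
}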
An extremely specialized case is when you need, say, 2^(-x to y), where x is of course negative and y is too large to do shifting on an int. You can still compute 2^x in constant time by messing with a float.
// Bit layout of an IEEE 754 single-precision float
// (bit-field order assumes a typical little-endian compiler).
typedef struct IeeeFloat
{
    unsigned int mantissa : 23;  // fraction bits
    unsigned int exponent : 8;   // biased exponent
    unsigned int signBit  : 1;
} IeeeFloat;

typedef union IeeeFloatUnion
{
    IeeeFloat brokenOut;
    float f;
} IeeeFloatUnion;

inline float twoToThe(char exponent)
{
    // notice how the range checking is already done on the exponent var
    static IeeeFloatUnion u;
    u.f = 2.0f;
    // Change the exponent part of the float
    u.brokenOut.exponent += (exponent - 1);
    return (u.f);
}
You can get more powers of 2 by using a double as the base type (thanks a lot to the commenters for helping to square this post away).
There's also the possibility that, by learning more about IEEE floats, other special cases of exponentiation will present themselves.
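As a side note (not part of the original answer): the standard library already exposes this exponent manipulation for doubles via ldexp from <math.h>, which computes x * 2^exp and so covers the wider range mentioned above:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* ldexp(x, e) returns x * 2^e; with x = 1.0 it yields 2^e
       over the full double exponent range. */
    printf("%g\n", ldexp(1.0, 100));  /* 2^100 */
    printf("%g\n", ldexp(1.0, -20));  /* 2^-20 */
    return 0;
}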
Exponentiation by squaring.
int ipow(int base, int exp)
{
    int result = 1;
    while (exp)
    {
        if (exp & 1)
            result *= base;  // fold in base^(2^i) when bit i of exp is set
        exp >>= 1;
        base *= base;        // base now holds base^(2^(i+1))
    }
    return result;
}
This is the standard method for doing modular exponentiation for huge numbers in asymmetric cryptography.
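A minimal sketch of that modular variant, assuming the modulus fits in 32 bits so the 64-bit intermediate products below cannot overflow (the function name and types are my own choices):

#include <stdint.h>

/* Computes (base^exp) mod m by squaring; requires m < 2^32 so that
   the intermediate products fit in 64 bits. */
uint64_t powmod(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t result = 1;
    base %= m;
    while (exp) {
        if (exp & 1)
            result = (result * base) % m;  /* multiply in the current bit */
        exp >>= 1;
        base = (base * base) % m;          /* square for the next bit */
    }
    return result;
}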
int pow(int base, int exponent)
{
    if (exponent == 0) return 1;               // base case
    int temp = pow(base, exponent / 2);
    if (exponent % 2 == 0) return temp * temp; // even exponent
    else return base * temp * temp;            // odd exponent
}
Note that exponentiation by squaring is not the most optimal method. It is probably the best you can do as a general method that works for all exponent values, but for a specific exponent value there might be a better method.
For instance, if you want to compute x^15, exponentiation by squaring will give you:
x^15 = (x^7)*(x^7)*x
x^7 = (x^3)*(x^3)*x
x^3 = x*x*x
This is a total of 6 multiplications.
It turns out this can be done using "just" 5 multiplications:
n*n = n^2
n^2*n = n^3
n^3*n^3 = n^6
n^6*n^6 = n^12
n^12*n^3 = n^15
I don't remember the source now, but I vaguely remember that there are no efficient algorithms to find this optimal sequence of multiplications.
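Spelled out in C, that five-multiplication chain looks like this (the function and variable names are mine, for illustration only):

int pow15(int n)
{
    int n2  = n  * n;   /* n^2  (1st multiplication) */
    int n3  = n2 * n;   /* n^3  (2nd) */
    int n6  = n3 * n3;  /* n^6  (3rd) */
    int n12 = n6 * n6;  /* n^12 (4th) */
    return n12 * n3;    /* n^15 (5th) */
}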
When you say "efficiency," you need to specify efficient in relation to what. Speed? Memory usage? Code size? Maintainability?
There is an exhaustive discussion of this topic in:
D.E. Knuth, The Art of Computer Programming, Vol. 2: Seminumerical Algorithms, 2nd ed. Addison-Wesley, Reading, MA, 1981.
In terms of the number of multiplications required, there is an algorithm there that is superior to repeated squaring, but what the most optimal method will be depends on your exact application.
Just as a follow up to comments on the efficiency of exponentiation by squaring.
The advantage of that approach is that it runs in log(n) time. For example, if you were going to calculate something huge, such as x^1048575 (that is, an exponent of 2^20 - 1), you only have to go through the loop 20 times, not 1 million+ times as with the naive approach.
Also, in terms of code complexity, it is simpler than trying to find the most optimal sequence of multiplications, a la Pramod's suggestion.
Edit:
I guess I should clarify before someone tags me for the potential for overflow. This approach assumes that you have some sort of hugeint library.
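Short of a full hugeint library, one mitigation (my own addition, not part of the original answer) is to run the same loop over a 64-bit unsigned type; the result still wraps modulo 2^64 once it outgrows 64 bits, so this only postpones the problem:

#include <stdint.h>

/* Same squaring loop over 64-bit unsigned integers; values silently
   wrap mod 2^64 on overflow, so truly huge results still need bignums. */
uint64_t ipow64(uint64_t base, unsigned exp)
{
    uint64_t result = 1;
    while (exp) {
        if (exp & 1)
            result *= base;
        exp >>= 1;
        base *= base;
    }
    return result;
}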
There are a few decent answers above, such as the one by PeterAllenWebb (citing Knuth) and the one by Pramod (the terminology he was missing though was "addition chains").
Exponentiation by Squaring is indeed not optimal. Here is one very clever method:
"Fast Exponentiation Using Data Compression" by Yacov Yacobi, SIAM Journal on Computing, Volume 28 , Issue 2 (April 1999), Pages: 700 - 703 .
I am working on an efficient algorithm here...
Now, I don't know if this would be computationally expensive...
I was thinking along the lines that 3^4 = 81... instead of iterating 4 times or recursively calling the function 4 times, we know that 3*3 = 9 and 9^2 = 81. This gives us the approach of saying that 3^4 = 9^2; going through 2 iterations is much more efficient than 4, but how and where in the function could we determine this without too much overhead?
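That observation (3^4 = 9^2) is exactly the squaring step used in the answers above: square the base and halve the exponent. A minimal recursive sketch of that idea (the function name is mine):

int ipow_recursive(int base, int exp)
{
    if (exp == 0)
        return 1;
    if (exp % 2 == 0)
        return ipow_recursive(base * base, exp / 2);      /* e.g. 3^4 -> 9^2 */
    return base * ipow_recursive(base * base, exp / 2);   /* odd exponent */
}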