views:

718

answers:

6

I feel like I must just be unable to find it. Is there any reason that the C++ `pow` function does not implement the "power" function for anything except floats and doubles?

I know the implementation is trivial, I just feel like I'm doing work that should be in a standard library. A robust power function (i.e. one that handles overflow in some consistent, explicit way) is not fun to write.
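Here is a sketch of the kind of function I mean (the name `checked_ipow` is mine, and it relies on the GCC/Clang `__builtin_mul_overflow` extension, so it is not portable to every compiler): exponentiation by squaring, with overflow reported explicitly via `std::optional` instead of wrapping silently.

```cpp
#include <cstdint>
#include <optional>

// Hypothetical sketch: exponentiation by squaring with explicit overflow
// detection. Returns std::nullopt on overflow instead of a wrapped value.
// __builtin_mul_overflow is a GCC/Clang builtin, not standard C++.
std::optional<std::int64_t> checked_ipow(std::int64_t base, unsigned exp) {
    std::int64_t result = 1;
    while (exp > 0) {
        if (exp & 1) {
            if (__builtin_mul_overflow(result, base, &result))
                return std::nullopt;   // result * base overflowed
        }
        exp >>= 1;
        if (exp > 0 && __builtin_mul_overflow(base, base, &base))
            return std::nullopt;       // base * base overflowed
    }
    return result;
}
```

Negative exponents are sidestepped by taking the exponent as `unsigned`, which is one of the design options discussed below.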

+5  A: 

Because there's no way to represent all integer powers in an int anyway:

>>> print 2**-4
0.0625
Ignacio Vazquez-Abrams
For a finite sized numeric type, there's no way to represent all powers of that type within that type due to overflow. But your point about negative powers is more valid.
Chris Lutz
I see negative exponents as something a standard implementation could handle, either by taking an unsigned int as the exponent or returning zero when a negative exponent is provided as input and an int is the expected output.
Dan O
or have separate `int pow(int base, unsigned int exponent)` and `float pow(int base, int exponent)`
Wallacoloo
They could just declare it as undefined behavior to pass a negative integer.
Johannes Schaub - litb
On all modern implementations, anything beyond `int pow(int base, unsigned char exponent)` is somewhat useless anyway. Either the base is 0 or 1, in which case the exponent doesn't matter; or it's -1, in which case only the last bit of the exponent matters; or `base > 1 || base < -1`, in which case `exponent < 256` on penalty of overflow.
MSalters
A: 

pow(double, double), powl(long, long), and powf(float, float) are in math.h of the standard C libraries, and should therefore be available for use in C++.

Link to docs.

alesplin
Hmm, "powl" seems to be the "long double" version, instead of a "long int" version. Darnit, pow is scary.
Johannes Schaub - litb
@Johannes: correct, `powl( )` is the `long double` variant of `pow( )`.
Stephen Canon
+2  A: 

For any fixed-width integral type, nearly all of the possible input pairs overflow the type anyway. What's the use of standardizing a function that doesn't give a useful result for the vast majority of its possible inputs?

You pretty much need to have a big-integer type in order to make the function useful, and most big-integer libraries provide the function.

Stephen Canon
A: 

Uhm,

$ cat test.c 
#include <stdio.h>
#include <math.h>

int main() {
    printf("%f\n", pow(2, -4));
    return 0;
}
$ gcc test.c -lm # -lm links the math library
$ ./a.out 
0.062500
Felix
The questioner was asking about a `pow` function that takes integer arguments, not floating-point.
Stephen Canon
Ah, I see. But why does it matter? The arguments are cast to float/double when you call it, it does not matter if they are in fact ints. As for the return value, you can cast that, too.
Felix
It matters because a good integer implementation is a lot more efficient. However, getting it fast _and right_ is non-trivial.
MSalters
@Felix Simply converting the return value to an integral could also lead to a rounding problem. (Note, I didn't downvote this, just responding to the comments)
+1  A: 

Perhaps because the processor's ALU didn't implement such a function for integers, but there is such an FPU instruction (as Stephen points out, it's actually a pair). So it was actually faster to cast to double, call pow with doubles, then test for overflow and cast back, than to implement it using integer arithmetic.

(for one thing, logarithms reduce powers to multiplication, but logarithms of integers lose a lot of accuracy for most inputs)

Stephen is right that on modern processors this is no longer true, but the C standard when the math functions were selected (C++ just used the C functions) is now what, 20 years old?

Ben Voigt
I don't know of any current architecture with a FPU instruction for `pow`. x86 has a `y log2 x` instruction (`fyl2x`) that can be used as the first part of a `pow` function, but a `pow` function written that way takes hundreds of cycles to execute on current hardware; a well written integer exponentiation routine is several times faster.
Stephen Canon
I don't know that "hundreds" is accurate, seems to be around 150 cycles for fyl2x then f2xm1 on most modern CPUs and that gets pipelined with other instructions. But you're right that a well-tuned integer implementation should be much faster (these days) since IMUL has been sped up a lot more than the floating-point instructions. Back when the C standard was written, though, IMUL was pretty expensive and using it in a loop probably did take longer than using the FPU.
Ben Voigt
Changed my vote in light of the correction; still, keep in mind (a) that the C standard underwent a major revision (including a large expansion of the math library) in 1999, and (b) that the C standard isn't written to any specific processor architecture -- the presence or absence of FPU instructions on x86 has essentially nothing to do with what functionality the C committee chooses to standardize.
Stephen Canon
It's not tied to any architecture, true, but the relative cost of a lookup table interpolation (generally used for the floating point implementation) compared to integer multiply has changed pretty much equally for all architectures I would guess.
Ben Voigt
A: 

Because exponents and powers are not data types.

DENNIS AGABA