views:

249

answers:

9

Possible Duplicate:
How do Trigonometric functions work?

What actually goes into the computation of trig functions like Sin, Cos, Tan and Atan?

I think I've found an optimization in my code where I can avoid using any of these functions and base the problem around slope instead of angles. So that means a couple division operations in place of the above trig functions. But I'd like to know more about what goes into those trig functions so that I can compare my new code (from the perspective of number of basic math ops). Or maybe I've just found a more circuitous way of doing the same thing, or worse, introduced a less efficient method.

Using C++ and Python, but I imagine this is fairly language-agnostic, with math operation cost being measured relative to the most primitive operations.
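For example, here's a rough sketch of the kind of rewrite I mean (function names are mine, purely for illustration): deciding which of two direction vectors is "steeper" via atan2 versus via cross-multiplied slopes, which avoids both the trig calls and the divisions:

```python
import math

def steeper_atan(dx1, dy1, dx2, dy2):
    # angle-based: two atan2 calls
    return math.atan2(dy1, dx1) > math.atan2(dy2, dx2)

def steeper_slope(dx1, dy1, dx2, dy2):
    # slope-based: dy1/dx1 > dy2/dx2, cross-multiplied so there is
    # no division at all (assumes dx1, dx2 > 0 -- a restriction the
    # atan2 version doesn't have)
    return dy1 * dx2 > dy2 * dx1
```

The two agree wherever the slope version's assumptions hold, which is the situation I'm asking about.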

A: 

Most trig functions are implemented as lookup tables these days.

codekaizen
A: 

My experience with trigonometric functions is that they are extremely fast, and that most of them are implemented as lookup tables anyway... That is, a few divisions and division-by-zero checks will probably be slower than a call to a trigonometric function.

aioobe
Faster than division? I guess that's comparing two completely different things and that becomes a question of architecture specifics... If it really is a lookup table, that would imply my old method with pure trig functions was faster, no?
AK
I would think so, yes. A lookup table requires a memory access though, unless you get a hit on some level of the cache.
aioobe
@aioobe, I'm pretty sure the lookup table for many systems is implemented on-chip, so that no memory access is required. I think an error in the lookup table was the cause of the infamous Pentium math problem some years ago.
Jeffrey L Whitledge
@Jeffrey, you have a link for that math-problem?
aioobe
@aioobe: http://en.wikipedia.org/wiki/Pentium_FDIV_bug : "...missing entries in the lookup table used by the divide operation..."
timday
Oh, right, I thought you meant an error in the trig-lookup method, and that didn't sound familiar. Now I get it :-)
aioobe
+5  A: 

Modern x86 processors include trig functions in their instruction set, but they take many cycles to execute. So if you're on such a processor, and if you have no dependencies in your code (i.e. you don't need the result of one sin computation in order to start the next one), then you probably won't get much faster than using sin and cos directly, as they will be fully pipelined, achieving an effective rate of 1 per cycle.

Oli Charlesworth
I believe the full trig functions are much slower than 1 per cycle. You may be able to do other things while waiting for the trig function to complete, but the trig operations themselves are not pipelined that way.
comingstorm
So, you may be able to speed things up by using an approximation, or you may not -- depending on your application, and how much approximation you can take.
comingstorm
Also, reworking your math to avoid the trig can sometimes speed up and simplify your code a great deal, completely independent of the cost of the trig operations themselves.
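(As one illustration of that point, with hypothetical function names: testing whether the angle between two vectors exceeds a threshold can drop acos entirely if you precompute the cosine of the threshold once and compare dot products instead.)

```python
import math

def angle_exceeds_trig(ax, ay, bx, by, theta):
    # direct version: normalize the dot product, then acos
    dot = ax * bx + ay * by
    return math.acos(dot / (math.hypot(ax, ay) * math.hypot(bx, by))) > theta

def angle_exceeds_fast(ax, ay, bx, by, cos_theta):
    # angle > theta  <=>  cos(angle) < cos(theta)  for theta in [0, pi],
    # since cos is decreasing there -- no acos, no division
    dot = ax * bx + ay * by
    return dot < cos_theta * math.sqrt((ax * ax + ay * ay) * (bx * bx + by * by))
```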
comingstorm
+5  A: 

You need to profile your code!

You need to profile this yourself. For reference, on my machine trig functions take about 100 ns and divisions about 20 ns, and you can turn numbers like that into an answer for your problem. But again, the most important thing is to profile on your own hardware; that way you get answers that are exactly right for your system.
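A minimal micro-benchmark sketch using Python's standard-library timeit (the absolute numbers will of course differ from machine to machine):

```python
import timeit

N = 100_000
# time N calls to math.sin versus N floating-point divisions
trig = timeit.timeit("sin(0.5)", setup="from math import sin", number=N)
div = timeit.timeit("a / b", setup="a, b = 1.0, 3.0", number=N)
print(f"sin: {trig / N * 1e9:.1f} ns/call, div: {div / N * 1e9:.1f} ns/op")
```

Note that in Python the interpreter and function-call overhead dominates; a C++ benchmark of the same two operations will give very different ratios.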

Peter
Of course that will get the job done, but it won't really tell me why one is faster.
AK
+1  A: 

Use a Taylor series to approximate them. Go out to about the 20! term to get lots of accuracy.
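A sketch of that idea (`sin_taylor` is a made-up name; the argument is range-reduced to [-pi, pi] first so the series converges quickly):

```python
import math

def sin_taylor(x, terms=20):
    # reduce x into [-pi, pi]; the series converges fast there
    x = math.remainder(x, math.tau)  # math.remainder needs Python 3.7+
    total, term = 0.0, x
    for n in range(terms):
        total += term
        # next odd-power term: multiply by -x^2 / ((2n+2)(2n+3))
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total
```

With range reduction, far fewer than 20 terms already reach double precision.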

calccrypto
this will be far slower.
Peter
Really? Won't it just be 1 + x + x^2/2... oh... okay
calccrypto
I'd guess the FPU does just that, but without all the looping overhead - why (except for educative purposes) would you ever do that?
nikie
A: 

(This was originally a comment on codekaizen's answer, but It got rather long....)

(Codekaizen): Most trig functions are implemented as lookup tables these days.

Um... since most trig functions take a double-precision argument, looking up every possible value isn't practical. I believe most implementations look up the table entries on either side and then interpolate between them (i.e. Sin(5.279) is 27.9% of the way from Sin(5) to Sin(6)). That's less work than calculating the value directly, but still a fair amount of calculation.
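A sketch of a table plus linear interpolation (the table step is chosen arbitrarily for illustration; a real implementation would use a much smarter layout and range reduction):

```python
import math

STEP = 0.001
# precomputed table covering one full period, plus a guard entry
TABLE = [math.sin(i * STEP) for i in range(int(math.tau / STEP) + 2)]

def sin_lut(x):
    x = x % math.tau                   # reduce to one period
    i, frac = divmod(x / STEP, 1.0)
    i = int(i)
    # linear interpolation between adjacent table entries
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])
```

With a 0.001 step the interpolation error is on the order of 1e-7, which shows why a pure table can't reach full double precision without getting enormous.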

James Curran
I've never seen a table where each distinct double value is stored.
codekaizen
@codekaizen: Neither have I. My point is that your answer implies that you have.
James Curran
@James Curran: I can't see how you can logically infer that. Having a table of values doesn't imply it covers all possible values. A moment's reflection should show that a table covering every value is just unreasonable.
codekaizen
Do you have any source to back that up? I highly doubt it. That would mean that calculating e.g. the 2nd derivative of sin(x) numerically would be 0! The Taylor approximation converges very quickly if you map x to a value in [0..pi/4].
nikie
It's not that bad - firstly you only need 0-45 deg because of symmetry. And you can use identities to work out sin(a+b) from sin(a) and sin(b), so you only need to store a very sparse table.
Martin Beckett
A: 

The only real answer you're going to get is "Profile."

Most likely, if this is not a bottleneck in your code, it will make no noticeable difference whatsoever.

BlueRaja - Danny Pflughoeft
A: 

Take a look at glibc. It uses several different implementations; some of them (like sysdeps/ieee754/s_sin.c) look really complicated, while others use an assembly instruction (like sysdeps/x86_64/fpu/s_sincos.S). It is hard to tell the actual time required without some measurements.

stribika
A: 

The near-optimal method (and the method typically used) for evaluating trigonometric functions is an orthogonal polynomial expansion (a Chebyshev series). Such a series with an appropriate number of terms will be faster than a table lookup.

http://en.wikipedia.org/wiki/Chebyshev_polynomials
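A sketch of that approach (helper names are made up): compute Chebyshev coefficients for sin on a reduced interval with the standard discrete-cosine formula, then evaluate the series with the Clenshaw recurrence.

```python
import math

def cheb_coeffs(f, n, a=-math.pi / 4, b=math.pi / 4):
    # Chebyshev coefficients of f on [a, b], sampled at Chebyshev nodes
    mid, half = (a + b) / 2, (b - a) / 2
    return [
        2.0 / n * sum(
            f(mid + half * math.cos(math.pi * (j + 0.5) / n))
            * math.cos(math.pi * k * (j + 0.5) / n)
            for j in range(n))
        for k in range(n)
    ]

def cheb_eval(c, x, a=-math.pi / 4, b=math.pi / 4):
    # Clenshaw recurrence; note the c[0]/2 convention
    t = (2.0 * x - a - b) / (b - a)   # map x into [-1, 1]
    b1 = b2 = 0.0
    for ck in reversed(c[1:]):
        b1, b2 = 2.0 * t * b1 - b2 + ck, b1
    return t * b1 - b2 + c[0] / 2.0

COEFFS = cheb_coeffs(math.sin, 10)   # 10 terms on [-pi/4, pi/4]
```

Ten terms on the reduced interval already give errors far below 1e-9, which is why a short polynomial competes well with memory-bound table lookups.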

drivel