So in high school math, and probably college, we are taught how to use trig functions, what they do, and what kinds of problems they solve. But they have always been presented to me as a black box. If you need the Sine or Cosine of something, you hit the sin or cos button on your calculator and you're set. Which is fine.

What I'm wondering is what actually happens inside that black box.

Thanks

+9  A: 

I believe they're calculated using Taylor Series or CORDIC. Some applications which make heavy use of trig functions (games, graphics) construct trig tables when they start up so they can just look up values rather than recalculating them over and over.
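
To illustrate the table idea, here's a minimal Python sketch (the table size and names are just illustrative; real implementations usually also interpolate between entries for better accuracy):

    import math

    # Precompute sin for TABLE_SIZE evenly spaced angles in [0, 2*pi).
    TABLE_SIZE = 4096
    SIN_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

    def fast_sin(theta):
        """Approximate sin(theta) by nearest-entry table lookup."""
        i = int(round(theta * TABLE_SIZE / (2 * math.pi))) % TABLE_SIZE
        return SIN_TABLE[i]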

Jon Galloway
+47  A: 

First, you have to do some sort of range reduction. Trig functions are periodic, so you need to reduce arguments down to a standard interval. For starters, you could reduce angles to be between 0 and 360 degrees. But by using a few identities, you realize you could get by with less. If you calculate sines and cosines for angles between 0 and 45 degrees, you can bootstrap your way to calculating all trig functions for all angles.
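
To make that concrete, here's a simplified sketch of reducing sine down to [0, pi/4] (illustrative Python; the names are made up, and production libraries perform the reduction in extended precision so huge arguments don't lose accuracy to cancellation):

    import math

    def reduce_for_sin(x):
        """Reduce x to r in [0, pi/4] so that sin(x) == sign * f(r),
        where f is sin or cos depending on the use_cos flag."""
        x = math.fmod(x, 2 * math.pi)        # periodicity: sin(x + 2*pi) = sin(x)
        if x < 0:
            x += 2 * math.pi                 # now 0 <= x < 2*pi
        quadrant = int(x // (math.pi / 2))   # which quadrant, 0..3
        r = x - quadrant * (math.pi / 2)     # remainder in [0, pi/2)
        use_cos = quadrant in (1, 3)         # e.g. sin(pi/2 + r) = cos(r)
        sign = -1.0 if quadrant in (2, 3) else 1.0
        if r > math.pi / 4:                  # fold: sin(r) = cos(pi/2 - r)
            r = math.pi / 2 - r
            use_cos = not use_cos
        return r, use_cos, sign

    def my_sin(x):
        # The core kernel (CORDIC, polynomial, ...) only ever sees [0, pi/4];
        # math.sin/math.cos stand in for it here.
        r, use_cos, sign = reduce_for_sin(x)
        return sign * (math.cos(r) if use_cos else math.sin(r))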

Once you've reduced your argument, most chips use a CORDIC algorithm to compute the sines and cosines. You may hear people say that computers use Taylor series. That sounds reasonable, but it's not true. The CORDIC algorithms are much better suited to efficient hardware implementation. (Software libraries may use Taylor series, say on hardware that doesn't support trig functions.) There may be some additional processing, using the CORDIC algorithm to get fairly good answers but then doing something else to improve accuracy.
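
Here is a bare-bones rotation-mode CORDIC in Python, just to show the shape of the algorithm (floating point for clarity, not hardware-exact; real chips use fixed-point arithmetic, where the multiplications by 2^-i become bit shifts):

    import math

    N = 32                                    # iterations ~ bits of accuracy
    ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
    K = 1.0
    for i in range(N):                        # compensate micro-rotation gain
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

    def cordic_sin_cos(theta):
        """(cos(theta), sin(theta)) for theta in [-pi/2, pi/2], rotation mode."""
        x, y, z = K, 0.0, theta
        for i in range(N):
            d = 1.0 if z >= 0.0 else -1.0     # rotate toward z = 0
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * ANGLES[i]
        return x, y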

There are some refinements to the above. For example, for very small angles theta (in radians), sin(theta) = theta to all the precision you have, so it's more efficient to simply return theta than to use some other algorithm. So in practice there is a lot of special case logic to squeeze out all the performance and accuracy possible. Chips with smaller markets may not go to as much optimization effort.
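
To see the small-angle case in action (a throwaway check on a typical IEEE-754 double platform; the exact cutoff a library uses is implementation-specific):

    import math

    theta = 1e-9
    # The next Taylor term, theta**3/6 ~ 1.7e-28, is far below half an ulp of
    # theta (~1.1e-25), so sin(theta) rounds to theta itself in doubles.
    print(math.sin(theta) == theta)   # True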

John D. Cook
Great answer -- though the CORDIC doesn't really need range reduction per se (in fact it is essentially a range reduction algorithm in its own right); it works fine for angles between -pi/2 and +pi/2, so you just have to do a 180 degree vector rotation for angles outside that range.
Jason S
+2  A: 

Trigonometric Functions defined. Sorry to hear that you got a taste of the "just punch the button" mathematical "instruction". I had to deal with that in a course on Linear Algebra, and I believe it's the world's worst form of instruction in math (or any other subject for that matter).

Harper Shelby
If his teachers were as helpful as you, it's not surprising.
PeterAllenWebb
@PeterAllenWebb: Pointing someone to a thorough explanation of exactly how the various trigonometric functions are defined, and common methods of calculating their values, is not helpful? Particularly when compared to the "just punch the sin button" method the OP's teachers (per the post) used?
Harper Shelby
@Jurassic_C: I can live with that. In fact, I edited to more correctly reflect my opinion on the matter. I certainly wasn't aiming to insult the OP -- the OP's math instructors, perhaps, but not the OP.
Harper Shelby
@jurassic_c -- agreed. SO should say, "you've just linked to a wikipedia article, are you sure you want to do that?" any time you post like that, and dock points.
nlucaroni
@Harper: No worries. Since the reword, my comment didn't really apply so I deleted it
Jurassic_C
@nlucaroni: If the wikipedia article is a comprehensive answer to the question, how in the world is that "crap"?
Harper Shelby
+4  A: 

Check out the Wikipedia article on trig functions. A good place to learn about actually implementing them in code is Numerical Recipes.

I'm not much of a mathematician, but my understanding of where sin, cos, and tan "come from" is that they are, in a sense, observed when you're working with right-angle triangles. If you take measurements of the lengths of sides of a bunch of different right-angle triangles and plot the points on a graph, you can get sin, cos, and tan out of that. As Harper Shelby points out, the functions are simply defined as properties of right-angle triangles.

A more sophisticated understanding comes from seeing how these ratios relate to the geometry of the circle, which leads to radians and all of that goodness. It's all there in the Wikipedia entry.

Parappa
+8  A: 

edit: Jack Ganssle has a decent discussion in his book on embedded systems, "The Firmware Handbook".

FYI: If you have accuracy and performance constraints, Taylor series should not be used to approximate functions for numerical purposes. (Save them for your calculus courses.) They exploit the analyticity of a function at a single point, i.e. the fact that all of its derivatives exist at that point. They don't necessarily converge in the interval of interest. They often do a lousy job of distributing the approximation's accuracy: they are near-exact at the evaluation point, but the error generally zooms upward as you move away from it. And if you have a function with a discontinuous derivative somewhere (e.g. square waves, triangle waves, and their integrals), a Taylor series will give you the wrong answer.

The best "easy" solution, when using a polynomial of maximum degree N to approximate a given function f(x) over an interval x0 < x < x1, is Chebyshev approximation; see Numerical Recipes for a good discussion. Note that although the Tj(x) and Tk(x) in the Wolfram article I linked to are defined using cos and inverse cosine, they are polynomials, and in practice you use a recurrence formula to get the coefficients. Again, see Numerical Recipes.
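
To make that concrete, here's a rough Python rendering of the Numerical Recipes chebft/chebev pair (a sketch under those names, not production code):

    import math

    def cheb_fit(f, a, b, n):
        """First n Chebyshev coefficients of f on [a, b]
        (the classic Numerical Recipes chebft scheme)."""
        # Sample f at the n Chebyshev nodes mapped from [-1, 1] into [a, b].
        fk = [f(0.5 * (b - a) * math.cos(math.pi * (k + 0.5) / n)
               + 0.5 * (b + a)) for k in range(n)]
        return [2.0 / n * sum(fk[k] * math.cos(math.pi * j * (k + 0.5) / n)
                              for k in range(n)) for j in range(n)]

    def cheb_eval(c, a, b, x):
        """Evaluate the truncated Chebyshev series via the Clenshaw recurrence."""
        y = (2.0 * x - a - b) / (b - a)       # map x from [a, b] to [-1, 1]
        d = dd = 0.0
        for cj in reversed(c[1:]):
            d, dd = 2.0 * y * d - dd + cj, d
        return y * d - dd + 0.5 * c[0]

    a, b = -math.pi / 2, math.pi / 2
    c = cheb_fit(math.sin, a, b, 6)           # 6 coefficients ~ degree 5
    err = max(abs(math.sin(x) - cheb_eval(c, a, b, x))
              for x in (a + (b - a) * i / 1000 for i in range(1001)))
    print(err)                                # ~7e-5, cf. the degree-5 row below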

edit: Wikipedia has a semi-decent article on approximation theory. One of the sources they cite (Hart, "Computer Approximations") is out of print (& used copies tend to be expensive) but goes into a lot of detail about stuff like this. (Jack Ganssle mentions this in issue 39 of his newsletter The Embedded Muse.)

edit 2: Here are some tangible error metrics for Taylor vs. Chebyshev approximations of sin(x), with a quick verification script after the tables. Some important points to note:

  1. The maximum error of a Taylor series approximation over a given range is much larger than the maximum error of a Chebyshev approximation of the same degree. (For about the same error, you can get away with one fewer term with Chebyshev, which means faster performance.)
  2. Range reduction is a huge win, because the contribution of the higher-order terms shrinks when the interval of approximation is smaller.
  3. If you can't get away with range reduction, your coefficients need to be stored with more precision.

Don't get me wrong: Taylor series will work properly for sine/cosine (with reasonable precision for the range -pi/2 to +pi/2; technically, with enough terms, you can reach any desired precision for all real inputs, but try to calculate cos(100) using Taylor series and you can't do it unless you use arbitrary-precision arithmetic). If I were stuck on a desert island with a nonscientific calculator and needed to calculate sine and cosine, I would probably use Taylor series, since the coefficients are easy to remember. But the real-world applications where you have to write your own sin() or cos() are rare enough that you'd be best off using an efficient implementation that reaches the desired accuracy -- which a Taylor series is not.

Range = -pi/2 to +pi/2, degree 5 (3 terms)

  • Taylor: max error around 4.5e-3, f(x) = x - x^3/6 + x^5/120
  • Chebyshev: max error around 7e-5, f(x) = 0.9996949x - 0.1656700x^3 + 0.0075134x^5

Range = -pi/2 to +pi/2, degree 7 (4 terms)

  • Taylor: max error around 1.5e-4, f(x) = x - x^3/6 + x^5/120 - x^7/5040
  • Chebyshev: max error around 6e-7, f(x) = 0.99999660x - 0.16664824x^3 + 0.00830629x^5 - 0.00018363x^7

Range = -pi/4 to +pi/4, degree 3 (2 terms)

  • Taylor: max error around 2.5e-3, f(x) = x - x^3/6
  • Chebyshev: max error around 1.5e-4, f(x) = 0.999x - 0.1603x^3

Range = -pi/4 to +pi/4, degree 5 (3 terms)

  • Taylor: max error around 3.5e-5, f(x) = x - x^3/6 + x^5/120
  • Chebyshev: max error around 6e-7, f(x) = 0.999995x - 0.1666016x^3 + 0.0081215x^5

Range = -pi/4 to +pi/4, degree 7 (4 terms)

  • Taylor: max error around 3e-7, f(x) = x - x^3/6 + x^5/120 - x^7/5040
  • Chebyshev: max error around 1.2e-9, f(x) = 0.999999986x - 0.166666367x^3 + 0.008331584x^5 - 0.000194621x^7
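
These figures are easy to reproduce; for example, checking the degree-5, [-pi/2, pi/2] row in Python (polynomials taken verbatim from the table above):

    import math

    def taylor5(x):
        return x - x**3 / 6 + x**5 / 120

    def cheby5(x):
        return 0.9996949 * x - 0.1666700 * x**3 + 0.0075134 * x**5 if False \
            else 0.9996949 * x - 0.1656700 * x**3 + 0.0075134 * x**5

    xs = [-math.pi / 2 + math.pi * i / 10000 for i in range(10001)]
    print(max(abs(math.sin(x) - taylor5(x)) for x in xs))  # ~4.5e-3
    print(max(abs(math.sin(x) - cheby5(x)) for x in xs))   # ~7e-5
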
Jason S
This comment is wrong. There is a time and a place for every approximation. If you do not know enough analysis to determine the region of convergence for ANY series approximation, you should NOT be using it. That goes for Taylor, Chebyshev, Padé, etc. series. Taylor series are often Good Enough.
kquinn
Downvoting for inappropriate use of "never". Yes, Taylor series are worse than minimax polynomials on intervals, but sometimes you *are* interested in asymptotic accuracy around a single point. Further, Taylor series are usually the way to go for arbitrary-precision arithmetic.
fredrikj
:shrug: I don't know about you but I've never been interested in evaluating a function in a small neighborhood around just one point. Even a quick least-squares fit over an interval is pretty damn easy to do. Anyone who's using Taylor series is just missing the point.
Jason S
@kquinn: the region of convergence for Chebyshev approximations isn't a useful concept since the interval over which they are calculated is an explicit input to the process.
Jason S
clarifications added.
Jason S
Upvoting because the responder knew Hart exists. :smile: Hart is the classic reference here, even if it was difficult to find when I bought a copy (in print) 25 years ago. It is worth every penny. Range reduction wherever possible, coupled with an appropriate approximation, be it Padé, Chebyshev, or even a Taylor series as appropriate, is a good approach. Padé or Chebyshev approximants are usually the better choice over a Taylor series though.
woodchips
A: 

If you're asking for a more physical explanation of sin, cos, and tan, consider how they relate to right-angle triangles. The actual numeric value of cos(lambda) can be found by forming a right-angle triangle with one of the angles being lambda and dividing the length of the triangle's side adjacent to lambda by the length of the hypotenuse. Similarly, for sin use the opposite side divided by the hypotenuse, and for tangent use the opposite side divided by the adjacent side. The classic mnemonic to remember this is SOHCAHTOA (pronounced "so-cah-toa").
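
A quick numeric sanity check of those ratios (any acute angle will do; the side lengths here are arbitrary):

    import math

    lam = 0.6                                 # an arbitrary acute angle (radians)
    adjacent = 2.0                            # pick any adjacent-side length
    opposite = adjacent * math.tan(lam)
    hypotenuse = math.hypot(adjacent, opposite)

    print(math.cos(lam), adjacent / hypotenuse)   # CAH: cos = adjacent/hypotenuse
    print(math.sin(lam), opposite / hypotenuse)   # SOH: sin = opposite/hypotenuse
    print(math.tan(lam), opposite / adjacent)     # TOA: tan = opposite/adjacent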

jeffD