I've been poring through .NET disassemblies and the GCC source code, but can't seem to find the actual implementation of sin() and other math functions anywhere... they always seem to be referencing something else.

Can anyone help me find them? I feel like it's unlikely that ALL hardware that C will run on supports trig functions in hardware, so there must be a software algorithm somewhere, right?

Edit: I'm aware of several ways that functions can be calculated, and have written my own routines to compute functions using Taylor series for fun. I'm curious about how real, production languages do it, since all of my implementations are always several orders of magnitude slower, even though I think my algorithms are pretty clever (obviously they're not).

+9  A: 

Yes, there are software algorithms for calculating sin too. Basically, calculating this kind of thing on a digital computer is usually done using numerical methods, such as approximating the Taylor series that represents the function.

Numerical methods can approximate functions to an arbitrary degree of accuracy, and since the accuracy available in a floating-point number is finite, they suit these tasks pretty well.
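For illustration, here is a minimal sketch of that idea in C: a truncated Taylor series summed until the next term no longer matters, compared against the library sin. This is a toy (the function name and the stopping tolerance are arbitrary choices), not how production libraries actually do it:

#include <math.h>
#include <stdio.h>

/* Toy sin(): sum the Taylor series until the next term is negligible.
   Works well only for small |x|; real libraries reduce the argument first. */
static double sin_taylor(double x)
{
    double term = x;   /* current term x^(2n+1) / (2n+1)!, sign included */
    double sum  = x;
    for (int n = 1; fabs(term) > 1e-17 * fabs(sum); n++) {
        term *= -x * x / ((2 * n) * (2 * n + 1));
        sum  += term;
    }
    return sum;
}

int main(void)
{
    printf("taylor: %.17f\n", sin_taylor(0.5));
    printf("libm:   %.17f\n", sin(0.5));
    return 0;
}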

Mehrdad Afshari
A real implementation probably won't use a Taylor series, since there are more efficient ways. You only need a correct approximation over the domain [0...pi/2], and there are functions that deliver a good approximation more efficiently than a Taylor series.
David Thornley
@David: I agree. I was careful enough to mention the word "like" in my answer. But Taylor expansion is a simple one to explain the idea behind methods that approximate functions. That said, I have seen software implementations (not sure if they were optimized) that used Taylor series.
Mehrdad Afshari
+2  A: 

They are typically implemented in software and will not use the corresponding hardware (that is, assembly) calls in most cases. However, as Jason pointed out, these are implementation specific.

Note that these software routines are not part of the compiler sources, but will rather be found in the corresponding library, such as the C library (glibc for the GNU compiler). See http://www.gnu.org/software/libc/manual/html_mono/libc.html#Trig-Functions

If you want greater control, you should carefully evaluate exactly what you need. Typical methods include interpolation in look-up tables, calling the assembly instruction (which is often slow), or other approximation schemes such as Newton-Raphson for square roots, sketched below.
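For instance, a minimal Newton-Raphson sketch for square roots (illustrative only; a real library would start from a much better initial guess, e.g. by manipulating the exponent bits):

/* Newton-Raphson for sqrt(a): iterate x <- (x + a/x) / 2.
   Each step roughly doubles the number of correct digits. */
double nr_sqrt(double a)
{
    if (a <= 0.0)
        return 0.0;                 /* toy handling of a <= 0 */
    double x = a < 1.0 ? 1.0 : a;   /* crude initial guess >= sqrt(a) */
    for (int i = 0; i < 64; i++) {
        double next = 0.5 * (x + a / x);
        if (next >= x)              /* converged: the sequence is decreasing */
            break;
        x = next;
    }
    return x;
}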

mnemosyn
+1  A: 

The GCC version is likely in the source code to the C Math Library (libm.a).

Judge Maygarden
+2  A: 

For sin specifically, using Taylor expansion would give you:

sin(x) := x - x^3/3! + x^5/5! - x^7/7! + ... (1)

you would keep adding terms until either the newest term drops below an accepted tolerance level, or just for a fixed number of steps (faster, but less precise). An example would be something like:

float sin(float x)
{
  float res=0, pow=x, fact=1;
  for(int i=0; i<5; ++i)
  {
    res+=pow/fact;
    pow*=-x*x;   /* flip the sign each term, as in (1) */
    fact*=(2*(i+1))*(2*(i+1)+1);
  }

  return res;
}

Note: the truncated form of (1) works because of the approximation sin(x) = x for small angles. For larger angles you need to calculate more and more terms to get acceptable results.

Blindy
+18  A: 

Functions like sine and cosine are implemented in microcode inside microprocessors. Intel chips, for example, have assembly instructions for these. A C compiler will generate code that calls these assembly instructions. (By contrast, a Java compiler will not. Java evaluates trig functions in software rather than hardware, and so it runs much slower.)

Chips do not use Taylor series to compute trig functions, at least not entirely. Primarily they use CORDIC, but they may also use a short Taylor series to polish up the result of CORDIC, or for special cases such as computing sine with high relative accuracy for very small angles. For more explanation, see this StackOverflow answer.
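As an illustration of the idea, here is a minimal CORDIC sketch in C (rotation mode, done in double precision for clarity; real hardware works in fixed point using only shifts and adds, with many more refinements):

#include <math.h>
#include <stdio.h>

#define CORDIC_ITERS 32

/* Compute sin and cos of z (|z| <= pi/2) with CORDIC rotations.
   The table holds atan(2^-i); the gain is the product of the
   cos(atan(2^-i)) factors, pre-applied to the starting vector. */
static void cordic_sincos(double z, double *s, double *c)
{
    static double atan_tab[CORDIC_ITERS];
    static double gain = 0.0;
    if (gain == 0.0) {                        /* build the tables once */
        double k = 1.0;
        for (int i = 0; i < CORDIC_ITERS; i++) {
            atan_tab[i] = atan(ldexp(1.0, -i));          /* atan(2^-i) */
            k *= 1.0 / sqrt(1.0 + ldexp(1.0, -2 * i));
        }
        gain = k;                             /* ~0.607252935 */
    }

    double x = gain, y = 0.0;                 /* start pre-scaled by the gain */
    for (int i = 0; i < CORDIC_ITERS; i++) {
        double d = (z >= 0.0) ? 1.0 : -1.0;   /* rotate toward z = 0 */
        double nx = x - d * ldexp(y, -i);     /* x - d*y*2^-i */
        double ny = y + d * ldexp(x, -i);     /* y + d*x*2^-i */
        x = nx; y = ny;
        z -= d * atan_tab[i];
    }
    *c = x; *s = y;
}

int main(void)
{
    double s, c;
    cordic_sincos(0.5, &s, &c);
    printf("cordic: %.15f %.15f\n", s, c);
    printf("libm:   %.15f %.15f\n", sin(0.5), cos(0.5));
    return 0;
}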

John D. Cook
Can you be more specific? How does one "polish up" an approximation using a Taylor series?
Henry Jackson
As far as "polishing up" an answer, suppose you're computing both sine and cosine. Suppose you know the exact value of both at one point (e.g. from CORDIC) but want the value at a nearby point. Then for a small difference h, you can apply the Taylor approximations f(x + h) ≈ f(x) + h f'(x) or f(x + h) ≈ f(x) + h f'(x) + h^2 f''(x)/2.
John D. Cook
You have identified a duplicate.
dmckee
+7  A: 

It is a complex question. Intel-like CPUs of the x86 family have a hardware implementation of the sin() function, but it is part of the x87 FPU and no longer used in 64-bit mode (where SSE2 registers are used instead). In that mode, a software implementation is used.

There are several such implementations out there. One is in fdlibm and is used in Java. As far as I know, the glibc implementation contains parts of fdlibm, and other parts contributed by IBM.

Software implementations of transcendental functions such as sin() typically use approximations by polynomials, often obtained from Taylor series.
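A sketch of what such a core polynomial looks like (here just the truncated Taylor coefficients evaluated with Horner's rule; production code such as fdlibm fits minimax coefficients and uses more terms):

/* Degree-7 odd polynomial for sin(x), meant for a reduced argument,
   say |x| <= pi/4. The coefficients are the plain Taylor ones
   (-1/3!, 1/5!, -1/7!); real libraries use minimax-fitted values. */
static double sin_poly(double x)
{
    double z = x * x;
    return x + x * z * (-1.0 / 6.0
                + z * ( 1.0 / 120.0
                + z * (-1.0 / 5040.0)));
}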

Thomas Pornin
SSE2 registers are *not* used to calculate sin(), neither in x86 nor in x64 mode and, of course, sin is calculated in hardware regardless of the mode. Hey, it's 2010 we live in :)
Igor Korkhov
@Igor: that depends on what math library you're looking at. It turns out that the most optimized math libraries on x86 use SSE software implementations for `sin` and `cos` that are faster than the hardware instructions on the FPU. Simpler, more naive libraries tend to use the `fsin` and `fcos` instructions.
Stephen Canon
@Stephen Canon: Do those fast libraries have 80-bit precision as FPU registers do? I have a sneaking suspicion that they favor speed over precision, which of course is reasonable in many scenarios, for example in games. And I do believe that calculating sine with 32-bit precision by using SSE and precomputed intermediate tables might be faster than by using `FSIN` with full precision. I would be very grateful if you could tell me the names of those fast libraries; it would be interesting to have a look.
Igor Korkhov
@Igor: on x86 in 64-bit mode, at least on all Unix-like systems I know of, precision is limited to 64 bits, not the 79 bits of the x87 FPU. The software implementation of `sin()` happens to be about twice as fast as what `fsin` computes (precisely because it is done with less precision). Note that the x87 is known to have a bit less actual precision than its announced 79 bits.
Thomas Pornin
@Thomas Pornin: thank you for the information. But does it mean that `long double` contains garbage in the last 15 binary digits or is not supported at all?
Igor Korkhov
The return type of the C library `sin` function is `double`. Carrying extra precision is allowed by the standard, but will lead to hard-to-diagnose numerical errors, and is not something that software writers should depend on, as that precision will be lost anytime the compiler needs to spill to the stack. Two of the libraries that I had in mind are the Intel math library, distributed with ICC, and the OS X system math library, both of which return a fully-accurate double-precision `sin` substantially faster than the `fsin` instruction.
Stephen Canon
@Igor Korkhov: On platforms that support `long double` but use software implementations of the trig functions, there will typically be either a software long-double `sinl` or they might use a software argument reduction followed by the hardware `fsin` or `fcos` instruction as a core approximation.
Stephen Canon
@Igor: On x86 in 64-bit mode, on Unix-like systems, the `long double` type is backed by the x87 80-bit registers; the size (as returned by `sizeof`) is 16 (hence 128 bits, 48 of which are unused). The `sin()` function returns a `double`, not a `long double`. `sinl()` uses and returns `long double`. `sinl()` is rarely used since `sin()` is twice as fast, and situations which require the extra precision (which is slight) are not common.
Thomas Pornin
For many common use cases, a fully accurate double-precision `sin( )` can be more like 5-6x faster than `sinl( )`.
Stephen Canon
@Stephen Canon, @Thomas Pornin: thank you very much for the clarification!
Igor Korkhov
Indeed, both the 32-bit and 64-bit implementations of sin() in the msvc runtime libraries do *not* use the FSIN instruction. In fact, they give different results; take for instance sin(0.70444454416678126). This will result in 0.64761068800896837 (right, within 0.5*(eps/2) tolerance) in a 32-bit program, and in 0.64761068800896848 (wrong) in a 64-bit one.
e.tadeu
+3  A: 

Use a Taylor series and try to find a relation between the terms of the series, so you don't calculate things again and again.

Here is an example for cosine:

#include <math.h>   /* for fabs */

double cosinus(double x, double prec)
{
    double t, s;
    int p;

    p = 0;
    s = 1.0;    /* running sum; starts with the p = 0 term */
    t = 1.0;    /* current term: (-1)^p * x^(2p) / (2p)! */
    while (fabs(t / s) > prec)
    {
        p++;
        t = (-t * x * x) / ((2 * p - 1) * (2 * p));
        s += t;
    }
    return s;
}

Using this, we get each new term of the sum from the one already computed (we avoid recomputing the factorial and x^(2p)).

explanation

Yassir
Did you know you can use the Google Chart API to make formulas like this using TeX? http://code.google.com/apis/chart/docs/gallery/formulas.html
Gab Royer
+2  A: 

Whenever such a function is evaluated, at some level there is most likely either:

  • A table of values which is interpolated (for fast, inaccurate applications - e.g. computer graphics)
  • The evaluation of a series that converges to the desired value --- probably not a Taylor series, more likely something based on a fancy quadrature like Clenshaw-Curtis.

If there is no hardware support then the compiler probably uses the latter method, emitting only assembler code (with no debug symbols), rather than using a C library --- making it tricky for you to track the actual code down in your debugger.
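A minimal sketch of the first approach (the table size and the use of the library sin() to fill the table are arbitrary choices for illustration; assumes the common M_PI from math.h):

#include <math.h>

#define TAB_SIZE 256

/* sin() over one period via a precomputed table plus linear interpolation.
   Accuracy is limited by table density; typical for graphics-style code. */
static double table_sin(double x)
{
    static double tab[TAB_SIZE + 1];
    static int ready = 0;
    if (!ready) {                     /* fill the table on first use */
        for (int i = 0; i <= TAB_SIZE; i++)
            tab[i] = sin(2.0 * M_PI * i / TAB_SIZE);
        ready = 1;
    }
    double t = x / (2.0 * M_PI);
    t -= floor(t);                    /* reduce to [0, 1) */
    double pos = t * TAB_SIZE;
    int i = (int)pos;
    double frac = pos - i;
    return tab[i] + frac * (tab[i + 1] - tab[i]);
}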

Autopulated
+10  A: 

In GNU libm, the implementation of sin is totally system-dependent. Therefore you can find the implementation, for each platform, somewhere in the appropriate subdirectory of sysdeps.

Only one of these directories seems to include an implementation in C. It was contributed by IBM and looks hard to follow. In some regions it uses the familiar Taylor series, but there's an awful lot of code. Source: sysdeps/ieee754/dbl-64/s_sin.c

The version for Intel x86 processors is written in assembly. It simply uses the FPU's built-in fsin instruction. Source: sysdeps/i386/fpu/s_sin.S

fdlibm's implementation of sin in pure C is much simpler than glibc's and is nicely commented. Source: fdlibm/s_sin.c and fdlibm/k_sin.c

Jason Orendorff
To see that this is really the code that runs on x86: compile a program that calls `sin()`; type `gdb a.out`, then `break sin`, then `run`, then `disassemble`.
Jason Orendorff
Great, this is exactly what I was looking for.
Henry Jackson
@Henry: don't make the mistake of thinking that is good code though. It's really **terrible**, don't learn to code that way!
Andreas Bonini
@Andreas Hmm, you're right, the IBM code does look pretty awful compared to fdlibm. I edited the answer to add links to fdlibm's sine routine.
Jason Orendorff
@Jason: Yeah the fdlibm implementation is simpler, but it just calls `__kernel_sin`, so it's not really an answer. @Andreas: I wasn't really looking to study the style, I was mostly just curious about how it's done. As I said in my edit, my own implementations always seem way slow.
Henry Jackson
@Henry: `__kernel_sin` is defined in k_sin.c, though, and it's pure C. Click it again—I botched the URL the first time.
Jason Orendorff
+2  A: 

I'll try to answer for the case of sin() in a C program, compiled with GCC's C compiler on a current x86 processor (let's say an Intel Core 2 Duo).

In the C language, the standard C library includes common math functions that are not part of the language itself (e.g. pow, sin and cos for power, sine, and cosine respectively), declared in the header math.h. Now on a GNU/Linux system, these library functions are provided by glibc (GNU libc or GNU C Library). But the GCC compiler requires you to link against the math library (libm.so) using the -lm compiler flag to enable usage of these math functions; I'm not sure why it isn't part of the standard C library. These would be software versions of the floating-point functions, or "soft-float".

Now the compiler may optimize the standard C library function sin() (provided by libm.so) by replacing it with a call to a native instruction for your CPU/FPU's built-in sin() function, which exists as an FPU instruction (FSIN for x86/x87) on newer processors like the Core 2 series (and pretty much back to the i486DX). This would depend on the optimization flags passed to the gcc compiler. If the compiler was told to write code that would execute on any i386 or newer processor, it would not make such an optimization. The -mcpu=486 flag would inform the compiler that it was safe to make such an optimization.
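For example, with GCC you can invoke the x87 instruction directly through inline assembly, bypassing the library entirely (a sketch; note that fsin only produces a valid result for arguments with magnitude below 2^63, beyond which explicit range reduction is needed):

/* Call the x87 FSIN instruction directly (GCC inline assembly).
   The "t" constraint pins the operand to the top of the x87 stack.
   Only valid for |x| < 2^63; larger arguments need range reduction. */
static double fsin_hw(double x)
{
    __asm__ ("fsin" : "+t" (x));
    return x;
}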

Now if the program executed the software version of the sin() function, it would do so based on a CORDIC (COordinate Rotation DIgital Computer) or BKM algorithm, or more likely a table or power-series calculation, to compute such transcendental functions. [Src: http://en.wikipedia.org/wiki/Cordic#Application]

Any recent (since 2.9x approx.) version of gcc also offers a built-in version of sin, __builtin_sin(), that it will use to replace the standard call to the C library version, as an optimization.

I'm sure that is as clear as mud, but hopefully it gives you more information than you were expecting, and lots of jumping-off points to learn more yourself.

mctylr
+3  A: 

The actual implementation of library functions is up to the specific compiler and/or library provider. Whether it's done in hardware or software, whether it's a Taylor expansion or not, etc., will vary.

I realize that's absolutely no help.

John Bode
+1 for a factually correct answer =)
Stephen Canon
+1  A: 

If you want to look at the actual GNU implementation of those functions in C, check out the latest trunk of glibc. See the GNU C Library.

Chris
+1  A: 

If you want an implementation in software, not hardware, the place to look for a definitive answer to this question is Chapter 5 of Numerical Recipes. My copy is in a box, so I can't give details, but the short version (if I remember this right) is that you take tan(theta/2) as your primitive operation and compute the others from there. The computation is done with a series approximation, but it's something that converges much more quickly than a Taylor series.

Sorry I can't remember more without getting my hands on the book.
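For what it's worth, the identities that let tan(theta/2) serve as the primitive are the standard half-angle ones (this sketch is based on those identities, not on the book's actual algorithm):

/* Given t = tan(theta/2), recover sine and cosine:
   sin(theta) = 2t / (1 + t^2),  cos(theta) = (1 - t^2) / (1 + t^2). */
static void sincos_from_half_tan(double t, double *s, double *c)
{
    double d = 1.0 + t * t;
    *s = 2.0 * t / d;
    *c = (1.0 - t * t) / d;
}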

Norman Ramsey
+2  A: 

As many people have pointed out, it is implementation dependent. But as far as I understand your question, you were interested in a real software implementation of math functions but just didn't manage to find one. If this is the case, then here you are:

  • Download glibc source code from http://ftp.gnu.org/gnu/glibc/
  • Look at the file dosincos.c, located in the folder sysdeps/ieee754/dbl-64 of the unpacked glibc root
  • Similarly, you can find implementations of the rest of the math library; just look for the file with the appropriate name

You may also have a look at the files with the .tbl extension; their contents are nothing more than huge tables of precomputed values of different functions in binary form. That is why the implementation is so fast: instead of computing all the coefficients of whatever series they use, they just do a quick lookup, which is much faster. BTW, they do use Taylor series to calculate sine and cosine.

I hope this helps.

Igor Korkhov