Let's say you had to calculate the sine (cosine or tangent - whatever) where the domain is between 0.01 and 360.01 (using C#).

What would be more performant?

  1. Using Math.Sin
  2. Using a lookup array with precalculated values

I would anticipate that, given the domain, option 2 would be much faster. At what point in the precision of the domain (0.0000n) does the performance of the calculation exceed that of the lookup?

A: 

Math.Sin is faster. The people who wrote it are smart; they use table lookups when those are accurate and faster, and use the math when that is faster. And there's nothing about that domain that makes it particularly fast to handle: the first thing most trig implementations do is map the argument down to a favorable domain anyway.
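
For illustration, that reduction step might look something like this sketch (the general idea only; real libraries use more careful extended-precision reduction for huge arguments):

using System;

static class TrigRangeReduction
{
    const double TwoPi = 2.0 * Math.PI;

    // Fold any finite angle (in radians) into [0, 2*Pi) before the
    // polynomial or table step.
    public static double Reduce(double x)
    {
        double r = Math.IEEERemainder(x, TwoPi);   // lands in [-Pi, Pi]
        return r < 0 ? r + TwoPi : r;
    }
}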

RBarryYoung
a domain of 36000 possible values being looked up is much different than a domain of 360000000000000 values.
mson
Not every situation needs the same precision. The people who wrote the functions are smart but not magical.
Nosredna
Ah, I see, you were talking about the precision of the Domain, not its range.
RBarryYoung
Hmm, that is, the Domain's precision, rather than the Domain's range.
RBarryYoung
+6  A: 

For performance questions, the only right answer is the one you reach after testing. But, before you test, you need to determine whether the effort of the test is worth your time - meaning that you've identified a performance issue.

If you're just curious, you can easily write a test to compare the speeds. However, you'll need to remember that using memory for the lookup table can affect paging in larger apps. So, even if the lookup is faster in your small test, it could slow things down in a larger app that uses more memory.

John Fisher
+9  A: 

It used to be that an array lookup was a good optimization to perform fast trig calculations.

But with caching effects, built-in math coprocessors (which use table lookups themselves), and other performance improvements, it's best to time your specific code yourself to determine which will perform better.

Robert Cartaino
i would guess that a lookup takes much less processing than actually calculating a sin value. are you certain that calculating sin(90.00001) is faster than reading sin(90.0) as 1 from a small array? a priori - it seems like baloney...
mson
I used to use memoization (tabling) *all the time* to speed up graphics routines (lots of sines/cosines). When they added math co-processors to the CPU (which use table look-ups) the calculations could all be done in hardware and became less of an issue. Now, with on-board caches, smaller code blocks can give you a significant performance boost. If the amount of memory used to store the table causes cache misses, the performance loss can be significant. It's not a clear-cut issue anymore. You almost *have* to test your specific code to find out.
Robert Cartaino
mson, read this answer's main point: Measure.
Henk Holterman
+1 to Robert for suggesting you write it both ways and test.
Nosredna
For my DSP uses, the built in sin is only my choice in initialization code, never during run-time. Instead, I use LUTs and various approximations. I have to decide for each application which is the better choice.
Nosredna
@Nosredna - Yes. If I have a finite, well-defined set of numbers that I can pre-calculate at initialization, then putting them in a look-up table for faster run-time performance is often a good option. Timing your *actual* application is key.
Robert Cartaino
Even without a finite set of numbers, an interpolating assembly LUT will almost always beat built-in sin for my needs.
Nosredna
@henk lol - so the reason i asked the question is to get responses from people who've run into this issue, not to get conjecture... the answer of 'go measure it' i interpret as - i have little idea or experience in the matter. the helpful part of robert's response is he's done this type of calculation before and that it could potentially go either way.
mson
+7  A: 

Update: read through to the end. It looks like the lookup table is faster than Math.Sin after all.

I would guess that the lookup approach would be faster than Math.Sin. I would also say that it would be a lot faster, but Robert's answer made me think that I would still want to benchmark this to be sure. I do a lot of audio buffer processing, and I've noticed that a method like this:

for (int i = 0; i < audiodata.Length; i++)
{
    audiodata[i] *= 0.5; 
}

will execute significantly faster than

for (int i = 0; i < audiodata.Length; i++)
{
    audiodata[i] = Math.Sin(audiodata[i]);
}

If the difference between Math.Sin and a simple multiplication is substantial, I would guess that the difference between Math.Sin and a lookup would also be substantial.

I dunno, though, and my computer with Visual Studio is in the basement, and I'm too tired to take the 2 minutes it would take to determine this.

Update: OK, it took more than 2 minutes (more like 20) to test this, but it looks like Math.Sin is at least twice as fast as a lookup table (using a Dictionary). Here's the class that does Sin using Math.Sin or a lookup table:

public class SinBuddy
{
    private Dictionary<double, double> _cachedSins
        = new Dictionary<double, double>();
    private const double _cacheStep = 0.01;
    private double _factor = Math.PI / 180.0;

    public SinBuddy()
    {
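        // Note: accumulating angleDegrees with += bakes floating-point drift
        // into the Dictionary keys, so SinLookup only hits when the caller
        // steps through the angles in exactly the same way.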
        for (double angleDegrees = 0; angleDegrees <= 360.0; 
            angleDegrees += _cacheStep)
        {
            double angleRadians = angleDegrees * _factor;
            _cachedSins.Add(angleDegrees, Math.Sin(angleRadians));
        }
    }

    public double CacheStep
    {
        get
        {
            return _cacheStep;
        }
    }

    public double SinLookup(double angleDegrees)
    {
        double value;
        if (_cachedSins.TryGetValue(angleDegrees, out value))
        {
            return value;
        }
        else
        {
            throw new ArgumentException(
                String.Format("No cached Sin value for {0} degrees",
                angleDegrees));
        }
    }

    public double Sin(double angleDegrees)
    {
        double angleRadians = angleDegrees * _factor;
        return Math.Sin(angleRadians);
    }
}

And here's the test/timing code:

SinBuddy buddy = new SinBuddy();

System.Diagnostics.Stopwatch timer = new System.Diagnostics.Stopwatch();
int loops = 200;

// Math.Sin
timer.Start();
for (int i = 0; i < loops; i++)
{
    for (double angleDegrees = 0; angleDegrees <= 360.0; 
        angleDegrees += buddy.CacheStep)
    {
        double d = buddy.Sin(angleDegrees);
    }
}
timer.Stop();
MessageBox.Show(timer.ElapsedMilliseconds.ToString());

// lookup
timer.Start();
for (int i = 0; i < loops; i++)
{
    for (double angleDegrees = 0; angleDegrees <= 360.0;
        angleDegrees += buddy.CacheStep)
    {
        double d = buddy.SinLookup(angleDegrees);
    }
}
timer.Stop();
MessageBox.Show(timer.ElapsedMilliseconds.ToString());

Using a step value of 0.01 degrees and looping through the full range of values 200 times (as in this code) takes about 1.4 seconds using Math.Sin, and about 3.2 seconds using a Dictionary lookup table. Lowering the step value to 0.001 or 0.0001 makes the lookup perform even worse against Math.Sin. Also, this result is even more in favor of using Math.Sin, since SinBuddy.Sin does a multiplication to turn the angle in degrees into the angle in radians on every call, while SinBuddy.SinLookup just does a straight lookup.

This is on a cheap laptop (no dual cores or anything fancy). Robert, you da man! (But I still think I should get the check, coz I did the work).

Update 2: OK, I am thoroughly embarrassed. It turns out stopping and restarting the Stopwatch doesn't reset the elapsed milliseconds, so the lookup only seemed half as fast because its time included the time for the Math.Sin calls. Also, I reread the question and realized you were talking about caching the values in a simple array, rather than using a Dictionary. Here is my modified code (I'm leaving the old code up as a warning to future generations):

public class SinBuddy
{
    private Dictionary<double, double> _cachedSins
        = new Dictionary<double, double>();
    private const double _cacheStep = 0.01;
    private double _factor = Math.PI / 180.0;

    private double[] _arrayedSins;

    public SinBuddy()
    {
        // set up dictionary
        for (double angleDegrees = 0; angleDegrees <= 360.0; 
            angleDegrees += _cacheStep)
        {
            double angleRadians = angleDegrees * _factor;
            _cachedSins.Add(angleDegrees, Math.Sin(angleRadians));
        }

        // set up array
        int elements = (int)(360.0 / _cacheStep) + 1;
        _arrayedSins = new double[elements];
        int i = 0;
        for (double angleDegrees = 0; angleDegrees <= 360.0;
            angleDegrees += _cacheStep)
        {
            double angleRadians = angleDegrees * _factor;
            _arrayedSins[i] = Math.Sin(angleRadians);
            i++;
        }
    }

    public double CacheStep
    {
        get
        {
            return _cacheStep;
        }
    }

    public double SinArrayed(double angleDegrees)
    {
        int index = (int)(angleDegrees / _cacheStep);
        return _arrayedSins[index];
    }

    public double SinLookup(double angleDegrees)
    {
        double value;
        if (_cachedSins.TryGetValue(angleDegrees, out value))
        {
            return value;
        }
        else
        {
            throw new ArgumentException(
                String.Format("No cached Sin value for {0} degrees",
                angleDegrees));
        }
    }

    public double Sin(double angleDegrees)
    {
        double angleRadians = angleDegrees * _factor;
        return Math.Sin(angleRadians);
    }
}

And the test/timing code:

SinBuddy buddy = new SinBuddy();

System.Diagnostics.Stopwatch timer = new System.Diagnostics.Stopwatch();
int loops = 200;

// Math.Sin
timer.Start();
for (int i = 0; i < loops; i++)
{
    for (double angleDegrees = 0; angleDegrees <= 360.0; 
        angleDegrees += buddy.CacheStep)
    {
        double d = buddy.Sin(angleDegrees);
    }
}
timer.Stop();
MessageBox.Show(timer.ElapsedMilliseconds.ToString());

// lookup
timer = new System.Diagnostics.Stopwatch();
timer.Start();
for (int i = 0; i < loops; i++)
{
    for (double angleDegrees = 0; angleDegrees <= 360.0;
        angleDegrees += buddy.CacheStep)
    {
        double d = buddy.SinLookup(angleDegrees);
    }
}
timer.Stop();
MessageBox.Show(timer.ElapsedMilliseconds.ToString());

// arrayed
timer = new System.Diagnostics.Stopwatch();
timer.Start();
for (int i = 0; i < loops; i++)
{
    for (double angleDegrees = 0; angleDegrees <= 360.0;
        angleDegrees += buddy.CacheStep)
    {
        double d = buddy.SinArrayed(angleDegrees);
    }
}
timer.Stop();
MessageBox.Show(timer.ElapsedMilliseconds.ToString());

These results are quite different. Using Math.Sin takes about 850 milliseconds, the Dictionary lookup table takes about 1300 milliseconds, and the array-based lookup table takes about 600 milliseconds. So it appears that a (properly-written [gulp]) lookup table is actually a bit faster than using Math.Sin, but not by much.

Please verify these results yourself, since I have already demonstrated my incompetence.

MusiGenesis
c'mon quit being lazy... i'm trying to be lazy here...
mson
It's not just laziness - the cat's litter box down there is full, too. Although I guess that's just *more* laziness on my part.
MusiGenesis
lol - that's what you get a wife for... either she'll do it for you or she'll nag you into doing it...
mson
It's hard to beat a LUT. That one is not very demanding, though, as it doesn't even do a linear interpolation, as many sine lookups do.
Nosredna
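For the curious, a minimal sketch of the kind of interpolating LUT Nosredna is describing (illustrative C#, not his actual DSP code; it assumes the angle has already been reduced to [0, 2*Pi)):

using System;

class InterpolatingSineLut
{
    private readonly double[] _table;
    private readonly double _scale;

    public InterpolatingSineLut(int size)
    {
        _table = new double[size + 1];   // one extra entry so [i + 1] is always safe
        _scale = size / (2.0 * Math.PI);
        for (int i = 0; i <= size; i++)
        {
            _table[i] = Math.Sin(i / _scale);
        }
    }

    // Assumes radians has already been reduced to [0, 2*Pi).
    public double Sin(double radians)
    {
        double pos = radians * _scale;
        int i = (int)pos;          // index of the table entry just below
        double frac = pos - i;     // fractional distance to the next entry
        return _table[i] + frac * (_table[i + 1] - _table[i]);
    }
}

The interpolation costs a couple of extra operations per call, but lets you use a much smaller table for the same accuracy.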
@Nosredna: I think you should be making fun of my stopwatch code, not my lookup table. :)
MusiGenesis
I couldn't decide. It was a target-rich environment. :-) I do commend you, however, for actually writing code rather than spouting an opinion. Anyone who does a lot of optimization knows that you should never assert what will be faster until you code it (except for betting purposes, of course).
Nosredna
..."but not by much." Um. the call to sin takes 42% longer than the LUT. I would have said, "by a large margin."
Nosredna
this should be the selected answer.
San Jacinto
The implementation of SinArrayed is inefficient. Rather than dividing by .01 you should multiply by 100. Division by .01 is not exactly the same as multiplication by 100, hence a compiler can't optimize this.
Accipitridae
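Sketched as a drop-in edit to SinBuddy (the name _invCacheStep is made up for illustration):

// inside SinBuddy
private const double _invCacheStep = 1.0 / _cacheStep;   // folded to a constant at compile time

public double SinArrayed(double angleDegrees)
{
    // Multiply by the precomputed reciprocal instead of dividing by 0.01.
    int index = (int)(angleDegrees * _invCacheStep);
    return _arrayedSins[index];
}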
Robert C Cartaino is definitely right about the optimizations in modern processors. I just ran this same test in Windows Mobile on my Samsung i760 smartphone (using 2 loops instead of 200), and got these results: Math.Sin - 1800 ms, lookup with Dictionary - 750 ms, lookup from Array - 280 ms. So on a crude processor, lookup is massively better than Math.Sin.
MusiGenesis
@Accipitridae: well it *is* a target-rich environment, as Nosredna said. :) I just ran this with multiplying by 100 instead of dividing by .01, and it was a little bit faster (maybe 10%).
MusiGenesis
@Nosredna: in my original answer, I was going to guess that Math.Sin would take 40 times longer than the lookup, which is why I ended up saying that the lookup was not faster by much. Also I only ran it on my one computer, so I didn't want to claim that the lookup was universally significantly faster.
MusiGenesis
Ok, I did run your benchmarks, but without all the chaff: no separate class to encapsulate the lookup table, no method calls, no conversion from double to int, etc. The benchmark with table lookups then takes 15 ms and the benchmark computing sines takes 415 ms (on a 3.0 GHz Pentium III). So the problem with your benchmark is that it measures a lot of overhead. The problem with my benchmark (as others already pointed out) is that during the whole benchmark the lookup table sits nicely in the cache. Both benchmarks are too simple.
Accipitridae
@Accipitridae, good work. That's exactly why if you need the speed, you write the code twice and test it _in situ_. Benchmarks can only give a hint. My experience in real-world dsp apps (where my code is a dll running in a host that reports the %time I'm using up) is library sin() is not usable, but it's always worth testing to be sure.
Nosredna
@Accipitridae: can you post your benchmark code? I doubt very much whether "chaff" like class encapsulation and method calls would produce an order-of-magnitude difference like what you've reported, and in any event these factors and the double-to-index conversion would be par for the course for any drop-in replacement for Math.Sin.
MusiGenesis
@MusiGenesis: If you're doing a very small operation, the overhead of a method call can be outrageous.
Nosredna
@Nosredna: yes, but not 30X outrageous. I just did a quick comparison of multiplying a double by 2.0 a million times inline vs. multiplying by 2.0 in a separate class, and it's about 3X faster inline. I suspect the rest of Accipitridae's difference is due to removing the conversion from double to int, but I'm not sure how you would write a replacement for Math.Sin that *doesn't* take a double for the angle. That's why I'd like to see his code.
MusiGenesis
@Nosredna: I think benchmarking should generally be treated like science - you should publish not just your conclusions, but also your data so that others can verify independently.
MusiGenesis
Agree. A benchmark is a starting place. In C there are lots of clever ways to go from float or double to int--I don't know C# deeply enough to know the tricks in it.
Nosredna
I looked at some C# resources. C# allows you to drop into unmanaged C and assembly language, so it seems to me that when it's necessary to speed up an FFT or DFT, there's no reason that C# should be slower than C or assembly.
Nosredna
@MusiGenesis: I'm measuring loops like: double sum=0; for(int a=0; a<bound; ++a) { sum += LUT[a]; }. In particular, I'm assuming that the angle a already is an integer. Whether that is the case depends on the application. E.g., when doing FFTs, all angles are of the form k*2Pi/2**n, where k and n are integers. If n is a known constant then it is quite natural to store an angle simply by the integer k. So far I've always used this kind of representation for angles when working with FFTs and lookup tables, and therefore I forgot to mention this essential assumption. Sorry for the confusion.
Accipitridae
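Fleshed out, the stripped-down loop Accipitridae describes might look like this (the table size and 200 iterations are carried over from MusiGenesis's benchmark; the printed sum is there so the loop can't be optimized away as dead code):

using System;
using System.Diagnostics;

class LutLoopBenchmark
{
    static void Main()
    {
        const int N = 36001;                        // 0.00 to 360.00 degrees in 0.01 steps
        double[] lut = new double[N];
        for (int i = 0; i < N; i++)
        {
            lut[i] = Math.Sin(i * 0.01 * Math.PI / 180.0);
        }

        // The bare loop: integer index, no method call, no double-to-int
        // conversion in the hot path.
        Stopwatch sw = Stopwatch.StartNew();
        double sum = 0;
        for (int loop = 0; loop < 200; loop++)
        {
            for (int a = 0; a < N; a++)
            {
                sum += lut[a];
            }
        }
        sw.Stop();

        Console.WriteLine("{0} ms (sum = {1})", sw.ElapsedMilliseconds, sum);
    }
}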
In that case, the LUT is a no-brainer, no?
Nosredna
Good question. The size of the LUT is important. E.g. stephentyrone has a nice answer with some details. I don't know whether a cache miss or a sine is more expensive. I ran a small benchmark, which didn't show clear results (i.e. 150 cycles for a cache miss, 180 for a sine on my machine, but other CPUs will give different results and the type of memory plays a role too).
Accipitridae
@Accipitridae: I just took a look at my own FFT routine (which was borrowed code, and I never drilled into it except to change everything from double to float) and it turns out it generates lookup tables on the fly (using Math.Sin and Math.Cos) for every transformation, so I'm getting hit with the cost of *both* methods every time (along with the cost of creating the arrays). I've put off optimizing this routine so far, since it's been acceptably fast as is, but your comments have pointed me right at the biggest performance hit. Thank you.
MusiGenesis
@Accipitridae, for the purpose of a FFT or DFT (rather than an oscillator, where the ear can hear tiny errors), I was assuming the integer range would be a reasonable size, but I guess it can be pushed up as far as you want.
Nosredna
holy cow dude! thanks for all the work you've put in.
mson
also dude - i don't know if your answer is right or wrong, but i'm sending you a gift card (it will be anonymous and will take a while because i'm up to my eyeballs in work...)
mson
@mson: thanks for the check. As far as right or wrong, I'd go with what Accipitridae and Nosredna have to say on the subject. I got a hell of a lot out of this exercise myself, since I do FFT in my software synthesizer and this has shown me how to get past a major bottleneck in my code.
MusiGenesis
@mson: also, I know Nosredna from outside SO, and you shouldn't be fooled by his avatar - he is *not* a beautiful black woman (I think that's a picture of Diana Ross, but I'm not sure).
MusiGenesis
@Nosredna: I just noticed that you claim to have been married wearing Mick Fleetwood's pants, which is a weird coincidence since I graduated from college (Antioch '89) with a guy who claimed to be wearing Mick Fleetwood's pants. :)
MusiGenesis
@mson: I hope the gift card is virtual, by the way. I just checked my website, and I realized that my mailing address on the Contact page is about 5 years out of date. That's what you get from a website that says "© 2005".
MusiGenesis
+2  A: 

The answer to this depends entirely on how many values are in your lookup table. You say "the domain is between 0.01 and 360.01", but you don't say how many values in that range might be used, or how accurate you need the answers to be. Forgive me for not expecting to see significant digits used to convey implicit meaning in a non-scientific context.

More information is still needed to answer this question. What is the expected distribution of values between 0.01 and 360.01? Are you processing a lot of data other than the simple sin( ) computation?

36000 double-precision values take 36000 * 8 = 288,000 bytes (over 256 KB) of memory; the lookup table is too large to fit in the L1 cache on most machines. If you're running straight through the table, you'll miss L1 once per sizeof(cacheline)/sizeof(double) accesses, and probably hit L2. If, on the other hand, your table accesses are more or less random, you will miss L1 almost every time you do a lookup.

It also depends a lot on the math library of the platform you're on. Common i386 implementations of the sin function, for example, range from ~40 cycles up to 400 cycles or even more, depending on your exact microarchitecture and library vendor. I haven't timed the Microsoft library, so I don't know exactly where the C# Math.Sin implementation would fall.

Since loads from L2 are generally faster than 40 cycles on a sane platform, one reasonably expects the lookup table, considered in isolation, to be faster. However, I doubt you're computing sin( ) in isolation; if your arguments to sin( ) jump all over the table, you will be blowing other data needed for other steps of your computation out of the cache; although the sin( ) computation gets faster, the slowdown to other parts of your computation may more than outweigh the speedup. Only careful measurement can really answer this question.

Am I to understand from your other comments that you're doing this as part of an FFT computation? Is there a reason you need to roll your own FFT instead of using one of the numerous extremely high-quality implementations that already exist?

Stephen Canon
here is a link about significant digits... http://en.wikipedia.org/wiki/Significant_figures
mson
I also do not interpret significant figures to have any significance to the question. In programming contexts, unless otherwise specified, the precision of a number is determined by its type.
recursive
+1  A: 

Since you mention Fourier transforms as an application, you might also consider computing your sines/cosines using the equations

sin(x+y) = sin(x)cos(y) + cos(x)sin(y)

cos(x+y) = cos(x)cos(y) - sin(x)sin(y)

I.e. you can compute sin(n * x), cos(n * x) for n = 0, 1, 2, ... iteratively from sin((n-1) * x), cos((n-1) * x) and the constants sin(x), cos(x), at a cost of 4 multiplications per step. Of course that only works if you have to evaluate sin(x), cos(x) on an arithmetic sequence.
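
Sketched in code (rounding error accumulates slowly over long sequences, so in practice you might re-seed from Math.Sin every few thousand steps):

using System;

class SineRecurrence
{
    // Fills sines[n] = sin(n * x) and cosines[n] = cos(n * x) using the
    // angle-addition identities: one sin/cos pair up front, then four
    // multiplications (plus two additions) per step.
    public static void Fill(double x, double[] sines, double[] cosines)
    {
        double sinX = Math.Sin(x);
        double cosX = Math.Cos(x);
        double s = 0.0;   // sin(0 * x)
        double c = 1.0;   // cos(0 * x)
        for (int n = 0; n < sines.Length; n++)
        {
            sines[n] = s;
            cosines[n] = c;
            double sNext = s * cosX + c * sinX;   // sin((n + 1) * x)
            c = c * cosX - s * sinX;              // cos((n + 1) * x)
            s = sNext;
        }
    }
}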

Comparing the approaches without the actual implementation is difficult. It depends a lot on how well your tables fit into the caches.

Accipitridae
I've seen this approach used in oscillators. It's a good one.
Nosredna
I tried this once for an FFT implementation, which is one application the OP mentions. I still used tables in the end because the result didn't need to be precise, and hence a small table was enough.
Accipitridae
Synth that uses phasor rotation: http://www.tutututututu.de/synths/dsfsynthesis/dsfsynthesis.html
Nosredna
Nice. Thanks a lot.
Accipitridae
A: 

As you may have thousands of values in your lookup table, what you may want to do instead is use a dictionary: when you calculate a value with the C# function, put it in the dictionary, so that each value is only calculated one time.

But there is no reason to recalculate the same value over and over.
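
A minimal sketch of that memoization approach (illustrative code only):

using System;
using System.Collections.Generic;

class MemoizedSin
{
    private readonly Dictionary<double, double> _cache
        = new Dictionary<double, double>();

    // Compute each distinct input once with Math.Sin, then serve
    // repeats from the dictionary.
    public double Sin(double angleRadians)
    {
        double value;
        if (!_cache.TryGetValue(angleRadians, out value))
        {
            value = Math.Sin(angleRadians);
            _cache[angleRadians] = value;
        }
        return value;
    }
}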

James Black
You have to be careful with that. In some cases a dictionary lookup could be slower than the sin calculation.
Nosredna
The only way to know is by profiling to see where it starts to be a problem. For example, if you are using Windows CE then you may find the sin calculation to be much slower, but there is no one solution for all hardware.
James Black
A dictionary could, on some systems, beat a library sin(), but it's hard to imagine it beating an array unless the array is implemented as a dictionary. Agreed that you must implement and time to be sure.
Nosredna
@Nosredna, in reference to your first comment on this answer: you also have to be careful because your sine calculator could be slower than a hash function... what was the point in commenting that?
San Jacinto
My point is that you can't make the assumption that avoiding calculation with a dictionary will speed it up. You have to try it and test it. Saying, "there's no reason to recalculate the same value over and over" is not necessarily true, because sometimes it's faster to do exactly that.
Nosredna
If you do go down the dictionary route, it's best to preload it with all expected input values and not let any other inputs be used. If you allow arbitrary float, double, or even long values to be your keys, you can fill all available memory (or hit whatever maximum number of key/value pairs your compiler allows).
Nosredna
Why preload? If you just clear the dictionary at the beginning of the series, then you will only calculate each value once and then just look it up, but you won't know what you have. Otherwise you end up preloading, then approximating, which may be close enough, but costs some extra calculations.
James Black
Preloading would be for realtime applications. You want to do anything slow in init. But the real concern here is making sure all your inputs are known so you don't fill up your memory with a dictionary. In the case of some known number of integer values going in, you're safe. But in that case why not use an array, which is a direct address rather than a hash?
Nosredna
If all of your equations are between 0 and pi/2 then you have a great deal that is preloaded, but not used. How to preload or load a dictionary would depend on what you know before you start the calculations.
James Black