I want to put together an SDR system that initially tunes AM, and later FM, etc. The system I am planning to use will have a sine lookup table for Direct Digital Synthesis (DDS). In order to tune properly, I expect to need precise control over the frequency of the sine wave fed to the mixer (a multiplier in this case). I expect that linear interpolation will be close, but I think a non-linear method will give better results.

What is a good and fast interpolation method to use for sine tables? Multiplication and addition are cheap on the target system; division is costly.

Edit: I am planning on implementing constants with multiply/shift functions to normalize them to scaled integers. Intermediate values will use wide adds, and multiplies will use 18 or 17 bits. Floating-point "pre-computation" can be used, but not on the target platform. When I say "division is costly" I mean that it has to be implemented using the multipliers and a lot of code. It's not unthinkable, but it should be avoided. Likewise, true IEEE floating-point methods would take a significant amount of resources on this platform, as well as a custom implementation.
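For example, a minimal sketch of that multiply/shift scheme (my illustration, with a Q17 scale chosen to fit the 18-bit multipliers; the constant and names are hypothetical):

#include <stdint.h>

/* A fractional constant k is stored as round(k * 2^17); "multiply by k"
   is then one 18-bit multiply plus a right shift. */
#define K_Q17 ((int32_t)(0.7071067812 * (1 << 17) + 0.5))  /* e.g. sin(45 deg) */

static inline int32_t mul_by_k(int32_t x)
{
    /* wide intermediate product, then shift out the scale factor */
    return (int32_t)(((int64_t)x * K_Q17) >> 17);
}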

Any SDR experiences would be helpful.

+3  A: 

Interpolation in a sine table is effectively resampling. Obviously you can get perfect results with a single call to sin, so whatever your solution is, it needs to outperform that. For fixed-filter resampling, you're still going to have only a fixed set of available points (a 3:1 upsampler means you'll have 2 new points available between each point in your table). How expensive is memory on the target system? My primary recommendation is simply to increase the table resolution and use linear interpolation. You'll get the same results as a smaller table plus an upsampling filter, but with less computational overhead.
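For instance, a minimal sketch of that recommendation (bit widths, names, and table size are my own illustration, not from the answer): a 32-bit DDS phase accumulator whose top 10 bits index a 1024-entry table and whose next 16 bits drive the linear interpolation.

#include <stdint.h>

extern const int16_t sine_lut[1024 + 1];  /* generated offline; extra entry avoids wrap logic */

int16_t dds_next_sample(uint32_t *phase, uint32_t tuning_word)
{
    *phase += tuning_word;                /* f_out = tuning_word * f_clk / 2^32 */
    uint32_t p    = *phase;
    uint32_t idx  = p >> 22;              /* top 10 bits: table index */
    uint32_t frac = (p >> 6) & 0xFFFF;    /* next 16 bits: interpolation fraction */
    int32_t  s0   = sine_lut[idx];
    int32_t  s1   = sine_lut[idx + 1];
    return (int16_t)(s0 + (((s1 - s0) * (int32_t)frac) >> 16));
}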

280Z28
Unfortunately, I won't be able to call a "sin" function. I may write a C program on my PC to generate the sine table for me, but there's no C lib on the target. Memory isn't cheap; I can trade logic for fast memory, but it's a limited system resource. I should be able to pipeline the math as long as it doesn't get too crazy.
NoMoreZealots
+6  A: 

If you don't get very good results with linear interpolation, you can try the trigonometric relations.

Sum and Difference Formulas

sin(A+B) = sinA*cosB + cosA*sinB
sin(A-B) = sinA*cosB - cosA*sinB
cos(A+B) = cosA*cosB - sinA*sinB
cos(A-B) = cosA*cosB + sinA*sinB

and you can have precalculated sin and cos values for the coarse A and fine B ranges, e.g. (see the sketch after the ranges):

A range: 0, 10, 20, ... 90
B range: 0.01 ... 0.99
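A minimal fixed-point sketch of that idea (table sizes, names, and the Q15 scaling are illustrative, not from the answer): one coarse table covers the full range, one fine table covers a single coarse step, and sin(A+B) is rebuilt from two multiplies and an add.

#include <stdint.h>

#define COARSE 256                       /* coarse angle steps */
#define FINE   256                       /* fine steps per coarse step */

extern const int16_t sinA[COARSE], cosA[COARSE];  /* sin/cos of coarse angles, Q15 */
extern const int16_t sinB[FINE],   cosB[FINE];    /* sin/cos of fine offsets, Q15  */

int16_t sin_lookup(uint16_t a, uint16_t b)        /* angle = coarse a + fine b */
{
    int32_t s = (int32_t)sinA[a] * cosB[b]        /* sinA*cosB */
              + (int32_t)cosA[a] * sinB[b];       /* + cosA*sinB */
    return (int16_t)(s >> 15);                    /* drop the Q15 scale */
}

Two 256-entry table pairs give the same angular resolution as a single 65,536-entry table, at the price of two multiplies and an add per sample.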
Nick D
That sounds like a decent idea. I have to think about it. I'm planning on using an FPGA platform and coding it in VHDL, so I like the fact that this can work using only multipliers and full adders. I'll have to write some test code to see if it gets me where I want to be.
NoMoreZealots
Either the A range needs to be in steps of 1 degree or the B range needs to extend to 9.99 - or you have to repeat the operation many times to get to sin(27.5).
Jonathan Leffler
@Jonathan Leffler I think you're right, but I get the idea. With 18-bit multipliers, this gets to the point that the zero crossing takes 5 table entries to go from 1 to -1, which is the steepest part of the curve, using two 1K tables (effectively giving 20-bit resolution on the lookup, which is quick using CLBs as mini ROMs). Mathematical methods would require more multipliers and adders, and don't get me any closer in my resolution band for the increased complexity.
NoMoreZealots
+4  A: 

Table interpolation for smooth functions = ick hurl bleah. IMHO I would only use table interpolation for some really weird function, or where you absolutely needed to avoid discontinuities (note, though, that the derivatives of interpolated tables are still discontinuous). By the time you finish the table lookups and the required interpolation code, you could have already evaluated a polynomial or two, at least if multiplication doesn't cause you too much heartburn.

IMHO you're much better off using a Chebyshev approximation for each segment of the sine waveform (e.g. -90 to +90 degrees, or -45 to +45 degrees and then the other segments of the same width), and picking the minimum-degree polynomial that reduces your error to a desired value. If the segment is small enough, you could get away with a quadratic or maybe even a linear polynomial; there are tradeoffs between accuracy, number of segments, and polynomial degree.

See my post in this other question, it'll save you the trouble of calculating coefficients (at least if you believe my math).

(edit: in case this wasn't clear, you do the Chebyshev approximation at design time on your favorite high-powered PC, so that at run time you can use a dirtbag microcontroller or FPGA or whatever with a simple polynomial of degree 1-4. Don't go over degree 4 unless you know what you're doing; 3 or below would be better.)
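As an illustration of the run-time shape only (not Jason S's coefficients): the constants below are the classic two-term minimax pair for sin(x)/x on [0, pi/2], often cited from Abramowitz & Stegun 4.3.96, good to about 2e-4. In practice you would fit your own Chebyshev coefficients per segment at design time and then scale them to integers with the questioner's multiply/shift trick.

double sin_poly(double x)            /* x in radians, 0 <= x <= pi/2 */
{
    double x2 = x * x;
    /* Horner form: two multiplies and two adds after computing x^2 */
    return x * (1.0 + x2 * (-0.16605 + x2 * 0.00761));
}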

Jason S
Actually I still don't quite get where the precomputed-PC/embedded-system breakpoint is with this algorithm. It appears it will cost 6 multipliers to do, though. I'll have to compare Nick D's answer's performance against this for 18-bit multipliers. I'll use a poor man's floating point: scaled constants with a hardwired bit shift.
NoMoreZealots
+1  A: 

Have you considered using the Taylor series for the trig functions (found here)? This involves multiplication and division but depending on how your numbers are represented you may be able to turn the division into multiplication (or bit shifts if you're very lucky). You can compute as many terms of the series as you need and get your precision that way.
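For instance, a sketch of that trick (illustrative only): fold the divisions into precomputed reciprocal-factorial constants, so the target only multiplies and adds. Shown in double for clarity; on the FPGA the constants would become scaled integers as the questioner describes.

double sin_taylor(double x)              /* best near 0; range-reduce first */
{
    static const double INV3F = 1.0 / 6.0;     /* 1/3!  precomputed */
    static const double INV5F = 1.0 / 120.0;   /* 1/5!  precomputed */
    static const double INV7F = 1.0 / 5040.0;  /* 1/7!  precomputed */
    double x2 = x * x;
    /* Horner form of x - x^3/3! + x^5/5! - x^7/7! */
    return x * (1.0 - x2 * (INV3F - x2 * (INV5F - x2 * INV7F)));
}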

Alternately, if this sine wave is going to be an analog signal at some point, then you could just use a lookup-table approach and an analog filter to remove the sampling frequency from the resulting waveform. If your sampling frequency is 100 times the sine frequency, it will be easy to remove. You'll need a variable filter to do this. I've never done such a thing, but I know there are digital potentiometers that take a binary number and change their resistance. That could be the basis of a variable RC filter - probably with some op-amps for gain, etc.

Good luck!

Stephen Friederichs
Taylor gets there way too slowly, IMO. And filtering changes the phase.
Nosredna
Division is not unthinkable; I've thought of it myself. BUT it is VERY expensive on this platform and should be reserved for applications that can't be done another way.
NoMoreZealots
Taylor series don't use division... and in most cases should be eschewed in favor of Chebyshev approximations, as they have guaranteed convergence and error that is intentionally evenly distributed.
Jason S
+4  A: 

Why a table? This very fast function has its worst noise peak at -90 dB when the signal is at -20 dB. That's crazy good.

For resampling of audio, I always use one of the interpolators from the Elephant paper. This was discussed in a previous SO question.

If you're on a processor that doesn't have floating point, you can still do these things, but they are harder. I've been there. I feel your pain. Good luck! I used to do fp-to-integer conversions for fun, but now you'd have to pay me to do it. :-)


Cool online references that apply to your problem:

http://www.audiomulch.com/~rossb/code/sinusoids/

http://www.dattalo.com/technical/theory/sinewave.html


Edit: additional thoughts based on your comments

Since you're working on a tricky processor, maybe you should look into how to make your sine table have more angles to look up, but still keep it small.

Suppose you break a quadrant into 90 pieces (in reality, you'd probably use 256 pieces, but let's keep it 90 for familiarity and clarity). Encode those as 16 bits. That's 180 bytes of table so far.

Now, for every one of those degrees, we're going to have 9 (in reality probably 8 or 16) in-between points.

Let's take the range between 3 degrees and 4 degrees as an example.

sin(3)=0.052335956 //this will be in your table as a 16-bit number
sin(4)=0.069756474 //this will be in your table as a 16-bit number

so we're going to look at sin(3.1)

sin(3.1)=0.054078811 //we're going to be tricky and store the result
                     // in 8 bits as a percentage of the distance between
                     // sin(3) and sin(4)

What you want to do is figure out how sin(3.1) fits in between sin(3) and sin(4). If it's halfway between, code that as a byte of 128. If it's a quarter of the way between, code that as 64. (Here sin(3.1) is about 10% of the way from sin(3) to sin(4), so you'd store roughly 26.)

That's an additional 810 bytes (9 per degree), and you've encoded down to a tenth of a degree at 16-bit resolution in only 180+90*9 bytes. You can extend as needed (maybe going up to 32-bit angles and 16-bit tween angles) and linearly interpolate in between very quickly. To minimize storage space, you're taking advantage of the fact that consecutive values are close to each other.
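A minimal decode sketch of this scheme (table names, layout, and the Q15 scaling are my assumptions, not Nosredna's actual code):

#include <stdint.h>

extern const int16_t deg_lut[90 + 1];      /* sin() at whole degrees, Q15  */
extern const uint8_t tween_lut[90][9];     /* fraction bytes for .1 .. .9  */

int16_t sin_tenth_deg(int deg, int tenth)  /* e.g. sin(3.1 deg) -> (3, 1)  */
{
    int32_t s0 = deg_lut[deg];
    int32_t s1 = deg_lut[deg + 1];
    if (tenth == 0)
        return (int16_t)s0;
    int32_t frac = tween_lut[deg][tenth - 1];  /* 128 = halfway, 64 = quarter */
    return (int16_t)(s0 + (((s1 - s0) * frac) >> 8));
}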


Edit 2: better way to encode the in-between angles in a table

I just remembered that when I did this, I ended up very compactly expressing the difference between the expected value according to linear interpolation and the actual value. This error is always in the same direction (the sine curve is concave over the quadrant, so it always lies on the same side of the chord).

I first calculated the maximum error in the range and then based the scale on that.
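A design-time sketch of that encoding (my assumptions, not Nosredna's actual code; the division is fine here because it runs on the PC, not the target):

#include <stdint.h>

/* Store how far the true value deviates from straight linear interpolation,
   scaled so the worst-case error in the range maps to 255.  For sine, the
   error always has the same sign, so one unsigned byte suffices. */
uint8_t encode_tween(double actual, double lin, double max_err)
{
    return (uint8_t)((actual - lin) / max_err * 255.0 + 0.5);
}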

Worked great. I feel like I should do the code in a blog entry to illustrate. :-)

Nosredna
@Nosredna: I think you're missing something: those additional 90 bytes are not the same for each of the 90 pieces; you'd have to add 90 bytes x 90 pieces = 8100 bytes for your scheme.
Jason S
(you could use angle sum/difference formulae as per Nick D's suggestion)
Jason S
Yes! I screwed that up. Fixing.
Nosredna
You're saving about half the table size, is my point.
Nosredna