I wrote a small software synth for the iPhone.
To further tune performance, I measured my application with Shark and found that I am losing a lot of time in float/SInt16 conversions.
So I rewrote some parts to get around the conversions by pre-calculating lookup tables that return "ready-to-use" SInt16 samples. This works fine so far.
Currently I am trying to rewrite some filters and my ADSR envelope implementation to use only integer arithmetic, but I could use some tips on how to perform multiplications/divisions without floats.
I am targeting the iPhone canonical format:

  • LPCM
  • 16-bit integer samples

What are good approaches to apply an amplitude to my final sample without using a float?

Edit:
The only thing I have figured out so far is that I can divide by powers of 2 by right-shifting my current sample.

inBuffer[frame] = wavetable[i % cycleLengthInSamples] >> 4;

But I can't think of any elegant way to create a smooth ADSR envelope with that.

Edit2: Thanks for all your great answers!
My current approach:

  • bring all my ADSR envelope values into the positive SInt16 range
  • multiply with the current value from the wavetable (store intermediates as SInt32)
  • shift the result by 16 to the right

this seems to work :)
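A rough sketch of that approach in C, purely to illustrate the three steps (inBuffer, wavetable, and cycleLengthInSamples come from the snippets above; envLevel is an illustrative name, and the shift amount has to match the scale the envelope values are stored in):

SInt16 envLevel = 24576;                                                 // ADSR value scaled into the positive SInt16 range
SInt32 scaled = (SInt32)wavetable[i % cycleLengthInSamples] * envLevel;  // widen to 32 bits before multiplying
inBuffer[frame] = (SInt16)(scaled >> 16);                                // shift back down to a 16-bit sample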

+2  A: 

Fixed point is a good fit here, since you're already working with 16 bits. The simplest way is to multiply by a power of 10, chosen for the precision you need. If you can use 32-bit ints as intermediates, you should get decent precision. At the end you can convert back to a 16-bit int, rounding or truncating as you prefer.

Edit: You want to shift to the left to make the values bigger. Store the result of the shift in a type with more precision (32 or 64 bit, depending on what you need). Simple shifting won't work reliably if you are using signed types.

Watch out if you're multiplying or dividing two fixed point numbers. Multiplication is (a*n) * (b*n), which gives you a*b*n^2 instead of a*b*n. Division is (a*n) / (b*n), which gives a/b instead of (a*n)/b. That's why I suggested using powers of 10; it makes it easy to spot your mistakes if you're not familiar with fixed point.

When you're done with your calculations, shift back to the right to get back to a 16-bit int. If you want to get fancy, you can also round before you shift.
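To make this concrete, here is a minimal sketch of the rescale-after-multiply step with a power-of-10 scale (SCALE and the variable names are only for illustration, and int32_t comes from <stdint.h>; with a power-of-2 scale the final division becomes a right shift):

#define SCALE 1000                                /* 3 decimal digits of fraction */

int32_t a = (int32_t)(0.5  * SCALE);              /* 500 */
int32_t b = (int32_t)(0.25 * SCALE);              /* 250 */
int32_t product = a * b;                          /* 125000, now scaled by SCALE*SCALE */
int32_t result  = (product + SCALE / 2) / SCALE;  /* 125 = 0.125 * SCALE, with rounding */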

I suggest you do some reading if you're really interested in implementing efficient fixed point. http://www.digitalsignallabs.com/fp.pdf

patros
That's what I was thinking. But wouldn't you multiply by a power of two to get your fixed point? That only involves shifting bits.
Robert Harvey
Shifting can get funky when you're using signed ints. Powers of two are more efficient, but harder to debug. If it's the first time using fixed point, I'd recommend powers of ten.
patros
Thanks for your answers! I still haven't found an elegant way to implement a float-less ADSR envelope. I just tried right-shifting the samples, which results in a division by an arbitrary power of two and therefore reduces my amplitude, but I can't figure out how to create a smooth envelope with that.
weichsel
+1  A: 

Have a look at this page that describes fast multiplication algorithms.

http://www.newton.dep.anl.gov/askasci/math99/math99199.htm

Robert Harvey
+2  A: 

The answers to this SO question are pretty comprehensive in terms of implementation. Here's a bit more of an explanation than I saw over there:

One approach is to force all your numbers into a range, say [-1.0,1.0). Then you map those numbers into the range [-2^15,(2^15)-1]. For instance,

Half = round(0.5*32768); //16384
Third = round((1.0/3.0)*32768); //10923

When you multiply these two numbers you get

Temp = Half*Third; //178962432
Result = Temp/32768; //5461 = round((1.0/6.0)*32768)

Dividing by 32768 in the last line is the point Patros made about multiplies needing an extra scaling step. This makes more sense if you write the 2^N scaling explicitly:

x1 = x1Float*(2^15);
x2 = x2Float*(2^15);
Temp = x1*x2; // = x1Float*x2Float*(2^15)*(2^15)
Result = Temp/(2^15); //get back to 2^N scaling

So that's the arithmetic. For the implementation, note that multiplying two 16-bit integers needs a 32-bit result, so Temp should be 32-bit. Also, 32768 isn't representable in a signed 16-bit variable, so be aware that the compiler will use 32-bit immediates. And as you've already noted, you can shift to multiply/divide by powers of 2, so you can write

const int N = 15;                                 // 1.15 fixed point
SInt16 x1 = (SInt16)round(x1Float * (1 << N));
SInt16 x2 = (SInt16)round(x2Float * (1 << N));
SInt32 Temp = (SInt32)x1 * (SInt32)x2;            // 32-bit intermediate
SInt16 Result = (SInt16)(Temp >> N);
double FloatResult = ((double)Result) / (1 << N); // just to check the answer

But suppose [-1,1) isn't the right range? If you'd rather limit your numbers to, say, [-4.0,4.0), you can use N = 13. Then you have 1 sign bit, two bits before the binary point, and 13 after. These are called 1.15 and 3.13 fixed point fractional types respectively. You trade precision in the fraction for headroom.
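For instance, a quick sketch of the same multiply in 3.13 format (the values here are purely illustrative):

const int N = 13;                              // 3.13 fixed point, range [-4.0, 4.0)
SInt16 x1 = (SInt16)round(2.5 * (1 << N));     // 20480
SInt16 x2 = (SInt16)round(1.25 * (1 << N));    // 10240
SInt32 Temp = (SInt32)x1 * (SInt32)x2;
SInt16 Result = (SInt16)(Temp >> N);           // 25600 = 3.125 in 3.13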

Adding and subtracting fractional types works fine as long as you look out for saturation. For divide, as Patros said, the scaling actually cancels out. So you have to do

Quotient = (x1/x2) << N;

or, to preserve precision

Quotient = (SInt16)(((SInt32)x1 << N)/x2); //x1 << N needs wide storage

Multiplying and dividing by whole numbers works normally. For instance, to divide by 6 you can simply write

Quotient = x1/6; //equivalent to x1Float*(2^15)/6, stays scaled

And in the case of dividing by a power of 2,

Quotient = x1 >> 3; //divides by 8, can't do x1 << -3 as Patros pointed out

Adding and subtracting whole numbers, though, doesn't work naively. You first have to check that the integer fits in your x.y type, convert it to the equivalent fractional representation, and then proceed.
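A small sketch of that conversion in 3.13 format (again, the values are illustrative):

const int N = 13;
SInt16 x = (SInt16)round(1.5 * (1 << N));   // 12288 = 1.5 in 3.13
SInt16 two = 2 << N;                        // 16384 = 2.0 in 3.13 (2.0 fits in the 3.13 range)
SInt16 sum = x + two;                       // 28672 = 3.5 in 3.13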

I hope this helps with the idea; look at the code in that other question for clean implementations.

mtrw
Thank you! Very comprehensive.
weichsel
+1  A: 

Say you use a signed 16.16 fixed-point representation, so that a 32-bit integer has a signed 16-bit integer part and a 16-bit fractional part. I don't know what language is used in iPhone development (Objective-C perhaps?), but this example is in C:

#include <stdint.h>

typedef int32_t fixed16q16_t ;
#define FIXED16Q16_SCALE ( 1 << 16 )

fixed16q16_t mult16q16( fixed16q16_t a, fixed16q16_t b )
{
    return (a * b) / FIXED16Q16_SCALE ;
}

fixed16q16_t div16q16( fixed16q16_t a, fixed16q16_t b )
{
    return (a * FIXED16Q16_SCALE) / b ;
}

Note that the above is a simplistic implementation and provides no protection from arithmetic overflow. For example, in div16q16() I multiply before the divide to maintain precision, but depending on the operands the operation might overflow. You might use a 64-bit intermediate to overcome this. Also, the divide truncates toward zero because it uses integer division. This gives the best performance, but may affect the precision of iterative calculations. Fixes are simple but add to the overhead.
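As a rough sketch of what 64-bit-intermediate variants might look like (the _64 names are illustrative; they reuse fixed16q16_t and FIXED16Q16_SCALE from above, and int64_t from <stdint.h>):

fixed16q16_t mult16q16_64( fixed16q16_t a, fixed16q16_t b )
{
    // Widen to 64 bits so the raw product cannot overflow before rescaling.
    return (fixed16q16_t)( ( (int64_t)a * b ) / FIXED16Q16_SCALE ) ;
}

fixed16q16_t div16q16_64( fixed16q16_t a, fixed16q16_t b )
{
    // Pre-scale in 64 bits so a * 2^16 cannot overflow before the divide.
    return (fixed16q16_t)( ( (int64_t)a * FIXED16Q16_SCALE ) / b ) ;
}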

Note that when multiplying or dividing by a constant power of two, most compilers will spot the trivial optimisation and use a shift. However, the behaviour of a right shift of a negative signed integer is not fully defined by C (it is implementation-defined), so I have left it to the compiler to work out for safety and portability. YMMV in whatever language you are using.

In an OO language, fixed16q16_t would naturally be a candidate for a class with operator overloading so you can use it like a normal arithmetic type.

You may find it useful to convert between types:

double fixed16q16_to_double( fixed16q16_t fix )
{
    return (double)fix / FIXED16Q16_SCALE ;
}

int fixed16q16_to_int( fixed16q16_t fix )
{
    // Note this rounds to nearest rather than truncates
    return ((fix + FIXED16Q16_SCALE/2)) / FIXED16Q16_SCALE ;
}

fixed16q16_t int_to_fixed16q16( int i )
{
    return i * FIXED16Q16_SCALE ;
}

fixed16q16_t double_to_fixed16q16( double d )
{
    return (int)(d * FIXED16Q16_SCALE) ;
}
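As a usage sketch tying this back to the original question of applying an amplitude to a sample (envGain, sample, and the 64-bit widening are illustrative, not part of the functions above):

fixed16q16_t envGain = double_to_fixed16q16( 0.75 ) ;                          /* 49152 = 0.75 in 16.16 */
int16_t sample = 20000 ;
int16_t out = (int16_t)( ( (int64_t)sample * envGain ) / FIXED16Q16_SCALE ) ;  /* 15000 */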

Those are the basics; it is possible to get more sophisticated and add trig and other math functions.

Fixed-point addition and subtraction work with the built-in + and - operators and their variants.

Clifford