
I'd like to build a synthesizer for the iPhone. I understand that it's possible to use custom audio units on the iPhone. At first glance this sounds promising, since there are lots and lots of Audio Unit programming resources available. However, using custom audio units on the iPhone seems a bit tricky (see: http://lists.apple.com/archives/Coreaudio-api/2008/Nov/msg00262.html)

This seems like the sort of thing that loads of people must be doing, but a simple Google search for "iphone audio synthesis" doesn't turn up anything along the lines of a nice and easy tutorial or a recommended toolkit.

So, anyone here have experience synthesizing sound on the iPhone? Are custom audio units the way to go, or is there another, simpler approach I should consider?

+3  A: 

Hi, I'm also investigating this. I think the AudioQueue API is probably the way to go.

Here's as far as I got; it seems to work okay.

File: BleepMachine.h

//
//  BleepMachine.h
//  WgHeroPrototype
//
//  Created by Andy Buchanan on 05/01/2010.
//  Copyright 2010 Andy Buchanan. All rights reserved.
//

#include <AudioToolbox/AudioToolbox.h>

// Class to implement sound playback using the AudioQueue APIs.
// Currently just supports playing two sine wave tones, one per
// stereo channel. The sound data is little-endian signed 16-bit @ 44.1kHz
//
class BleepMachine
{
    static void staticQueueCallback( void* userData, AudioQueueRef outAQ, AudioQueueBufferRef outBuffer )
    {
        BleepMachine* pThis = reinterpret_cast<BleepMachine*> ( userData );
        pThis->queueCallback( outAQ, outBuffer );
    }
    void queueCallback( AudioQueueRef outAQ, AudioQueueBufferRef outBuffer );

    AudioStreamBasicDescription m_outFormat;

    AudioQueueRef m_outAQ;

    enum 
    {
        kBufferSizeInFrames = 512,
        kNumBuffers = 4,
        kSampleRate = 44100,
    };

    AudioQueueBufferRef m_buffers[kNumBuffers];

    bool m_isInitialised;

    struct Wave 
    {
        Wave(): volume(1.f), phase(0.f), frequency(0.f), fStep(0.f) {}
        float   volume;
        float   phase;
        float   frequency;
        float   fStep;
    };

    enum 
    {
        kLeftWave = 0,
        kRightWave = 1,
        kNumWaves,
    };

    Wave m_waves[kNumWaves];

public:
    BleepMachine();
    ~BleepMachine();

    bool Initialise();
    void Shutdown();

    bool Start();
    bool Stop();

    bool SetWave( int id, float frequency, float volume );
};

// Notes by name. Integer value is number of semitones above A.
enum Note
{
    A       = 0,
    Asharp,
    B,
    C,
    Csharp,
    D,
    Dsharp,
    E,
    F,
    Fsharp,
    G,
    Gsharp,

    Bflat = Asharp,
    Dflat = Csharp,
    Eflat = Dsharp,
    Gflat = Fsharp,
    Aflat = Gsharp,
};

// Helper function calculates fundamental frequency for a given note
float CalculateFrequencyFromNote( SInt32 semiTones, SInt32 octave=4 );
float CalculateFrequencyFromMIDINote( SInt32 midiNoteNumber );

File: BleepMachine.mm

//
//  BleepMachine.mm
//  WgHeroPrototype
//
//  Created by Andy Buchanan on 05/01/2010.
//  Copyright 2010 Andy Buchanan. All rights reserved.
//

#include "BleepMachine.h"

void BleepMachine::queueCallback( AudioQueueRef outAQ, AudioQueueBufferRef outBuffer )
{
    // Render the wave

    // AudioQueueBufferRef is documented as "opaque", but it's a reference to
    // an AudioQueueBuffer struct, which is not. All the sample code writes to
    // its fields directly, so I'm not quite sure what they mean by "opaque".
    SInt16* coreAudioBuffer = (SInt16*)outBuffer->mAudioData;

    // Specify how many bytes we're providing
    outBuffer->mAudioDataByteSize = kBufferSizeInFrames * m_outFormat.mBytesPerFrame;

    // Generate the sine waves as signed 16-bit stereo interleaved (little-endian)
    float volumeL = m_waves[kLeftWave].volume;
    float volumeR = m_waves[kRightWave].volume;
    float phaseL = m_waves[kLeftWave].phase;
    float phaseR = m_waves[kRightWave].phase;
    float fStepL = m_waves[kLeftWave].fStep;
    float fStepR = m_waves[kRightWave].fStep;

    for( int s=0; s<kBufferSizeInFrames*2; s+=2 )
    {
        float sampleL = ( volumeL * sinf( phaseL ) );
        float sampleR = ( volumeR * sinf( phaseR ) );

        short sampleIL = (int)(sampleL * 32767.0);
        short sampleIR = (int)(sampleR * 32767.0);

        coreAudioBuffer[s] =   sampleIL;
        coreAudioBuffer[s+1] = sampleIR;

        phaseL += fStepL;
        phaseR += fStepR;
    }

    m_waves[kLeftWave].phase = fmodf( phaseL, 2 * M_PI );   // Take modulus to preserve precision
    m_waves[kRightWave].phase = fmodf( phaseR, 2 * M_PI );

    // Enqueue the buffer
    AudioQueueEnqueueBuffer( m_outAQ, outBuffer, 0, NULL ); 
}

bool BleepMachine::SetWave( int id, float frequency, float volume )
{
    if ( ( id < kLeftWave ) || ( id >= kNumWaves ) ) return false;

    Wave& wave = m_waves[ id ];

    wave.volume = volume;
    wave.frequency = frequency;
    wave.fStep = 2 * M_PI * frequency / kSampleRate;

    return true;
}

bool BleepMachine::Initialise()
{
    m_outFormat.mSampleRate = kSampleRate;
    m_outFormat.mFormatID = kAudioFormatLinearPCM;
    m_outFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    m_outFormat.mFramesPerPacket = 1;
    m_outFormat.mChannelsPerFrame = 2;
    m_outFormat.mBytesPerPacket = m_outFormat.mBytesPerFrame = sizeof(UInt16) * 2;
    m_outFormat.mBitsPerChannel = 16;
    m_outFormat.mReserved = 0;

    OSStatus result = AudioQueueNewOutput(
                                          &m_outFormat,
                                          BleepMachine::staticQueueCallback,
                                          this,
                                          NULL,
                                          NULL,
                                          0,
                                          &m_outAQ
                                          );

    if ( result < 0 )
    {
        printf( "ERROR: %d\n", (int)result );
        return false;
    }

    // Allocate buffers for the audio
    UInt32 bufferSizeBytes = kBufferSizeInFrames * m_outFormat.mBytesPerFrame;

    for ( int buf=0; buf<kNumBuffers; buf++ ) 
    {
        OSStatus result = AudioQueueAllocateBuffer( m_outAQ, bufferSizeBytes, &m_buffers[ buf ] );
        if ( result )
        {
            printf( "ERROR: %d\n", (int)result );
            return false;
        }

        // Prime the buffers
        queueCallback( m_outAQ, m_buffers[ buf ] );
    }

    m_isInitialised = true;
    return true;
}

void BleepMachine::Shutdown()
{
    Stop();

    if ( m_outAQ )
    {
        // AudioQueueDispose also chucks any audio buffers it has
        AudioQueueDispose( m_outAQ, true );
    }

    m_isInitialised = false;
}

BleepMachine::BleepMachine()
: m_isInitialised(false), m_outAQ(0)
{
    for ( int buf=0; buf<kNumBuffers; buf++ ) 
    {
        m_buffers[ buf ] = NULL;
    }
}

BleepMachine::~BleepMachine()
{
    Shutdown();
}

bool BleepMachine::Start()
{
    OSStatus result = AudioQueueSetParameter( m_outAQ, kAudioQueueParam_Volume, 1.0 );
    if ( result ) printf( "ERROR: %d\n", (int)result );

    // Start the queue
    result = AudioQueueStart( m_outAQ, NULL );
    if ( result ) printf( "ERROR: %d\n", (int)result );

    return true;
}

bool BleepMachine::Stop()
{
    OSStatus result = AudioQueueStop( m_outAQ, true );
    if ( result ) printf( "ERROR: %d\n", (int)result );

    return true;
}

// A    (A4=440)
// A#   f(n)=2^(n/12) * r
// B    where n = number of semitones
// C    and r is the root frequency e.g. 440
// C#
// D    frq -> MIDI note number
// D#   p = 69 + 12 x log2(f/440)
// E
// F    
// F#
// G
// G#
//
// MIDI Note ref: http://www.phys.unsw.edu.au/jw/notes.html
//
// MIDI note numbers:
// A3   57
// A#3  58
// B3   59
// C4   60 <--
// C#4  61
// D4   62
// D#4  63
// E4   64
// F4   65
// F#4  66
// G4   67
// G#4  68
// A4   69 <--
// A#4  70
// B4   71
// C5   72

float CalculateFrequencyFromNote( SInt32 semiTones, SInt32 octave )
{
    semiTones += ( 12 * (octave-4) );
    float root = 440.f;
    float fn = powf( 2.f, (float)semiTones/12.f ) * root;
    return fn;
}

float CalculateFrequencyFromMIDINote( SInt32 midiNoteNumber )
{
    SInt32 semiTones = midiNoteNumber - 69;
    return CalculateFrequencyFromNote( semiTones, 4 );
}

//for ( SInt32 midiNote=21; midiNote<=108; ++midiNote )
//{
//  printf( "MIDI Note %d: %f Hz \n",(int)midiNote,CalculateFrequencyFromMIDINote( midiNote ) );
//}
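
The comment block above the frequency helpers also gives the reverse mapping, p = 69 + 12 x log2(f/440), which the listing doesn't implement. Here's a minimal sketch of that helper, assuming the same 440 Hz reference and rounding to the nearest note number:

    // Sketch only, not part of BleepMachine: frequency in Hz -> nearest MIDI
    // note number, using p = 69 + 12 * log2(f / 440) from the comment above.
    SInt32 CalculateMIDINoteFromFrequency( float frequency )
    {
        float p = 69.f + 12.f * log2f( frequency / 440.f );
        return (SInt32)lroundf( p );    // e.g. 440 Hz -> 69, ~261.63 Hz -> 60
    }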

Update: Basic usage info

  1. Initialise. Somewhere near the start; I'm doing this from initFromNib: in my code.

    m_bleepMachine = new BleepMachine;
    m_bleepMachine->Initialise();
    m_bleepMachine->Start();

  2. Now the sound playback is running, but generating silence.

  3. In your code, call this when you want to change the tone generation

    m_bleepMachine->SetWave( ch, frq, vol );

    where ch is the channel (0 or 1), frq is the frequency to set in Hz, and vol is the volume (0 = -inf dB, 1 = 0 dB); a concrete example follows after step 4.

  4. At program termination

    delete m_bleepMachine;
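
To make step 3 concrete, here's a minimal sketch using the CalculateFrequencyFromMIDINote helper from the listing (m_bleepMachine is assumed to be the pointer created in step 1):

    // Middle C (MIDI 60, ~261.6 Hz) on the left channel and concert A
    // (MIDI 69, 440 Hz) on the right, both at half volume.
    m_bleepMachine->SetWave( 0, CalculateFrequencyFromMIDINote( 60 ), 0.5f );
    m_bleepMachine->SetWave( 1, CalculateFrequencyFromMIDINote( 69 ), 0.5f );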

Andy J Buchanan
Thanks Andy. Much appreciated. I'm hoping I can find a toolkit that will handle some of this lower-level stuff for me. I'm coming from SuperCollider, and really like just being able to plug unit generators into one another. But seeing an example of a lower-level implementation is great. Quick newbie question -- what's the .mm extension? Also, if you feel like pasting in an example usage of your BleepMachine class, I'd be super grateful.
morgancodes
.mm extension = Objective-C++
Andy J Buchanan
Added basic usage info to answer.
Andy J Buchanan
Thanks for the details Andy!
morgancodes
A: 

PD has a version that runs on the iPhone, used by RjDj. If you are OK with using someone else's app rather than writing your own, you can do quite a bit in an RjDj scene, and there is a set of objects that let you patch it out and test it on a regular PD on your own computer.

I should mention: PD is a visual dataflow programming language; that is to say, it is Turing complete and can be used to develop graphical applications - but if you are going to do anything interesting, I would definitely look into best practices for patching.

Justin Smith
Thanks Justin, I definitely want to be able to develop my own app and not be tied to RjDj. I know SuperCollider is iPhone-able, but I think the SC license precludes using it in any sort of commercial software, so that's a bit of a turn-off.
morgancodes
I am actually a contributor to SuperCollider, and it has no such restriction. It is GPL, which means that if you modify the SuperCollider source, you need to share those modifications with anyone who gets the app (just as gcc is GPL, so any modifications you make to gcc to make your app run need to be shared), but if your code uses SuperCollider without modifying SuperCollider itself, there is no such restriction at all. That said, I am pretty sure you can only use the iPhone version of SC on jailbroken phones at the moment, which is why I did not suggest it.
Justin Smith
If I recall correctly, the reason that SC is not available through the App Store is that it is a full-fledged interpreter. By these standards RjDj should be excluded as well (PD is an unusual platform for programming, but it is a programming environment), but as far as I am concerned neither should be.
Justin Smith