I'd like to play a synthesised sound on an iPhone. Instead of using a pre-recorded sound and using SystemSoundID to play an existing binary, I'd like to synthesise it. That's partly because I want to be able to play the sound continuously (e.g. while the user's finger is on the screen) rather than as a one-off sound sample.

If I wanted to synthesise an A4 (the A above middle C, 440 Hz), I can calculate a sine wave using sin(); what I don't know is how to arrange those samples into a packet which CoreAudio can then play. Most of the tutorials that exist on the net are concerned with simply playing existing binaries.

Can anyone help me with a simple synthesised sine wave at 440 Hz?

+1  A: 

Many of the audio technologies allow for data to be passed in instead of a sound file. AVAudioPlayer, for example, has:

-initWithData:error:
Initializes and returns an audio player for playing a designated memory buffer.

- (id)initWithData:(NSData *)data error:(NSError **)outError

However, I am not sure how you would pass in a data pointer, start the sound, and then keep it looping by passing in other data pointers, or by repeating the same one, etc.
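
One way (a hedged sketch, not from the original answer) is to generate a small PCM buffer, wrap it in a minimal WAV header so AVAudioPlayer can parse it, and loop it with numberOfLoops = -1. The helper name sineWaveData below is invented for this example, and the header layout assumes a little-endian host (true on the iPhone):

#import <AVFoundation/AVFoundation.h>
#include <math.h>

// Build an in-memory WAV: 44-byte header followed by mono 16-bit PCM sine samples.
static NSData *sineWaveData(double frequency, double seconds) {
    const UInt32 sampleRate = 44100;
    const UInt32 frameCount = (UInt32)(seconds * sampleRate);
    const UInt32 dataSize = frameCount * sizeof(SInt16);     // mono, 16-bit
    UInt32 chunkSize = 36 + dataSize, fmtSize = 16, byteRate = sampleRate * 2;
    UInt16 pcmFormat = 1, channels = 1, blockAlign = 2, bitsPerSample = 16;

    NSMutableData *data = [NSMutableData dataWithCapacity:44 + dataSize];
    [data appendBytes:"RIFF" length:4];
    [data appendBytes:&chunkSize length:4];
    [data appendBytes:"WAVEfmt " length:8];
    [data appendBytes:&fmtSize length:4];
    [data appendBytes:&pcmFormat length:2];
    [data appendBytes:&channels length:2];
    [data appendBytes:&sampleRate length:4];
    [data appendBytes:&byteRate length:4];
    [data appendBytes:&blockAlign length:2];
    [data appendBytes:&bitsPerSample length:2];
    [data appendBytes:"data" length:4];
    [data appendBytes:&dataSize length:4];

    // 16-bit samples of a sine at the requested frequency.
    for (UInt32 i = 0; i < frameCount; ++i) {
        SInt16 sample = (SInt16)(sin(2.0 * M_PI * frequency * i / sampleRate) * 32767.0);
        [data appendBytes:&sample length:2];
    }
    return data;
}

// Usage (keep a strong reference to the player somewhere):
//   NSError *error = nil;
//   AVAudioPlayer *player =
//       [[AVAudioPlayer alloc] initWithData:sineWaveData(440.0, 1.0) error:&error];
//   player.numberOfLoops = -1;   // loop until you call -stop
//   [player play];

Generating a duration that contains a whole number of cycles (1 second of 440 Hz does) avoids a phase jump at the loop point, though a callback-driven API is a better fit for truly continuous, parameter-controlled sound.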

mahboudz
+8  A: 

What you want to do is probably to set up an AudioQueue. It allows you to fill a buffer with synthesized audio data in a callback. You would set up the AudioQueue to run in a new thread, like this:

#define BUFFER_SIZE 16384
#define BUFFER_COUNT 3
static AudioQueueRef audioQueue;
void SetupAudioQueue() {
    OSStatus err = noErr;
    // Setup the audio device.
    AudioStreamBasicDescription deviceFormat;
    deviceFormat.mSampleRate = 44100;
    deviceFormat.mFormatID = kAudioFormatLinearPCM;
    deviceFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    deviceFormat.mBytesPerPacket = 4;
    deviceFormat.mFramesPerPacket = 1;
    deviceFormat.mBytesPerFrame = 4;
    deviceFormat.mChannelsPerFrame = 2;
    deviceFormat.mBitsPerChannel = 16;
    deviceFormat.mReserved = 0;
    // Create a new output AudioQueue for the device.
    err = AudioQueueNewOutput(&deviceFormat, AudioQueueCallback, NULL,
                              CFRunLoopGetCurrent(), kCFRunLoopCommonModes,
                              0, &audioQueue);
    // Allocate buffers for the AudioQueue, and pre-fill them.
    for (int i = 0; i < BUFFER_COUNT; ++i) {
        AudioQueueBufferRef mBuffer;
        err = AudioQueueAllocateBuffer(audioQueue, BUFFER_SIZE, &mBuffer);
        if (err != noErr) break;
        AudioQueueCallback(NULL, audioQueue, mBuffer);
    }
    if (err == noErr) err = AudioQueueStart(audioQueue, NULL);
    if (err == noErr) CFRunLoopRun();
}
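
Since CFRunLoopRun() blocks, one way (a side note, not part of the original answer) to run the setup on its own thread is a plain pthread; StartAudioThread and AudioThreadEntry are names invented for this sketch:

#include <pthread.h>

// Entry point for the audio thread: sets up the queue against this thread's
// run loop and then blocks in CFRunLoopRun() until the run loop is stopped.
static void* AudioThreadEntry(void* unused) {
    SetupAudioQueue();
    return NULL;
}

void StartAudioThread(void) {
    pthread_t thread;
    if (pthread_create(&thread, NULL, AudioThreadEntry, NULL) == 0) {
        pthread_detach(thread);   // let the thread clean up after itself
    }
}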

Your callback method AudioQueueCallback will then be called whenever the AudioQueue needs more data. Implement it with something like:

void AudioQueueCallback(void* inUserData, AudioQueueRef inAQ,
                        AudioQueueBufferRef inBuffer) {
    void* pBuffer = inBuffer->mAudioData;
    UInt32 bytes = inBuffer->mAudioDataBytesCapacity;
    // Write max <bytes> bytes of audio to <pBuffer>
    inBuffer->mAudioDataByteSize = actualNumberOfBytesWritten;
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
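
As a hedged sketch, the "write audio" step could be filled in with a 440 Hz sine like this, assuming the interleaved stereo 16-bit 44.1 kHz format from SetupAudioQueue(); the static gPhase is invented here to carry the oscillator phase between callbacks:

#include <math.h>

static double gPhase = 0.0;

void AudioQueueCallback(void* inUserData, AudioQueueRef inAQ,
                        AudioQueueBufferRef inBuffer) {
    SInt16* samples = (SInt16*)inBuffer->mAudioData;
    UInt32 frames = inBuffer->mAudioDataBytesCapacity / 4;   // 4 bytes per stereo frame
    const double phaseIncrement = 2.0 * M_PI * 440.0 / 44100.0;
    for (UInt32 i = 0; i < frames; ++i) {
        SInt16 value = (SInt16)(sin(gPhase) * 32767.0);
        samples[2 * i]     = value;   // left channel
        samples[2 * i + 1] = value;   // right channel
        gPhase += phaseIncrement;
        if (gPhase > 2.0 * M_PI) gPhase -= 2.0 * M_PI;
    }
    inBuffer->mAudioDataByteSize = frames * 4;
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

Varying the frequency or amplitude between callbacks is what makes this approach suitable for continuous, touch-controlled sound.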
PeyloW
This isn't correct. You shouldn't be calling AudioQueueCallback in the allocation loop. I don't believe the description is set up correctly, either. Plus, you should be calling AudioQueueStart(audioQueue, nil) instead of this odd way. Look at the AudioUnit Framework instead.
thefaj
@thefaj: I believe you are the one who is incorrect. This example is taken from my app SC68 Player (http://itunes.apple.com/se/app/sc68-player/id295290413?mt=8), where I originally took the code for audio playback from Apple's example iPhone app SpeakHere (http://developer.apple.com/iphone/library/samplecode/SpeakHere/); look at the AQPlayer.mm file. Full source code for SC68 Player is available (http://www.peylow.se/sc68player.html).
PeyloW
Your example is missing AudioQueueStart(), which is how the AudioQueueCallback should get called.
thefaj
@thefaj: Ah, true, thanks. I have updated the example.
PeyloW
You forgot to check the error return of `AudioQueueNewOutput`.
Peter Hosey
Gods... I am regretting that I took actual code and tried to downsize it for an answer... I should probably have just written some pseudocode and added a link to Apple's docs.
PeyloW
I think it's useful to see code get corrected. I'm not saying be sloppy, but perfect code can sometimes hide the complexity of actually using some framework. This way we get to see how things can go wrong and how to fix them.
willc2
A: 

This code works right away. It uses the AudioUnit Framework.
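
The code being referred to isn't reproduced above. As a rough sketch only (not Davide's code; names such as CreateToneUnit and RenderTone are invented for this example), a RemoteIO AudioUnit tone generator could look roughly like this:

#include <AudioToolbox/AudioToolbox.h>
#include <AudioUnit/AudioUnit.h>
#include <math.h>

static AudioComponentInstance toneUnit;
static double tonePhase = 0.0;

// Render callback: the RemoteIO unit pulls <inNumberFrames> mono samples per call.
static OSStatus RenderTone(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags,
                           const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber,
                           UInt32 inNumberFrames, AudioBufferList *ioData) {
    const double phaseIncrement = 2.0 * M_PI * 440.0 / 44100.0;
    SInt16 *samples = (SInt16 *)ioData->mBuffers[0].mData;
    for (UInt32 i = 0; i < inNumberFrames; ++i) {
        samples[i] = (SInt16)(sin(tonePhase) * 32767.0);
        tonePhase += phaseIncrement;
        if (tonePhase > 2.0 * M_PI) tonePhase -= 2.0 * M_PI;
    }
    return noErr;
}

void CreateToneUnit(void) {
    // Find and instantiate the RemoteIO output unit.
    AudioComponentDescription desc = {0};
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    AudioComponent component = AudioComponentFindNext(NULL, &desc);
    AudioComponentInstanceNew(component, &toneUnit);

    // Ask the unit to pull its audio from RenderTone.
    AURenderCallbackStruct callback = { RenderTone, NULL };
    AudioUnitSetProperty(toneUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &callback, sizeof(callback));

    // Mono 16-bit signed integer PCM at 44.1 kHz.
    AudioStreamBasicDescription format = {0};
    format.mSampleRate = 44100;
    format.mFormatID = kAudioFormatLinearPCM;
    format.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    format.mBytesPerPacket = 2;
    format.mFramesPerPacket = 1;
    format.mBytesPerFrame = 2;
    format.mChannelsPerFrame = 1;
    format.mBitsPerChannel = 16;
    AudioUnitSetProperty(toneUnit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0, &format, sizeof(format));

    AudioUnitInitialize(toneUnit);
    AudioOutputUnitStart(toneUnit);   // AudioOutputUnitStop(toneUnit) ends the tone
}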

Davide Vosti