What are the correct ways of initializing (allocating memory) and releasing (freeing) an AudioBufferList with 3 AudioBuffers? (I'm aware that there might be more than one way of doing this.)

I'd like to use those 3 buffers to read sequential parts of an audio file into them and play them back using Audio Units.

A: 

First of all, I think that you actually want 3 AudioBufferLists, not one AudioBufferList with 3 AudioBuffer members. An AudioBuffer represents a single channel of data, so if you have 3 stereo audio files, you should put them in 3 AudioBufferLists, with each list having 2 AudioBuffers, one buffer for the left channel and one for the right. Your code would then process each list (and its respective channel data) separately, and you could store the lists in an NSArray or something like that.

Technically, there's no reason you can't have a single buffer list with 3 interleaved audio channels (interleaved meaning that the samples for all channels are woven together in a single buffer of data), but this goes against the conventional use of the API and will be a bit confusing.
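For illustration, the two layouts differ only in how the struct fields are filled in. A hypothetical sketch for a stereo stream (the variable names are mine, and the mData sample buffers would still be allocated as in the code further down):

// Non-interleaved: one single-channel AudioBuffer per channel
AudioBufferList *split = (AudioBufferList*)malloc(offsetof(AudioBufferList, mBuffers) + 2 * sizeof(AudioBuffer));
split->mNumberBuffers = 2;
split->mBuffers[0].mNumberChannels = 1; // left channel samples only
split->mBuffers[1].mNumberChannels = 1; // right channel samples only

// Interleaved: a single AudioBuffer holding both channels, samples ordered L R L R ...
AudioBufferList interleaved;
interleaved.mNumberBuffers = 1;
interleaved.mBuffers[0].mNumberChannels = 2;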

Anyways, this part of the CoreAudio API is more C-ish than Objective-C-ish, so you'd use malloc/free instead of alloc/release. The code would look something like this:

// sizeof(AudioBufferList) only includes space for a single AudioBuffer,
// so allocate extra room for the second one (offsetof is from <stddef.h>)
AudioBufferList *bufferList = (AudioBufferList*)malloc(offsetof(AudioBufferList, mBuffers) + 2 * sizeof(AudioBuffer));
bufferList->mNumberBuffers = 2; // 2 for stereo, 1 for mono
for(int i = 0; i < 2; i++) {
  int numSamples = 123456; // Number of sample frames in the buffer
  bufferList->mBuffers[i].mNumberChannels = 1;
  bufferList->mBuffers[i].mDataByteSize = numSamples * sizeof(Float32);
  bufferList->mBuffers[i].mData = (Float32*)malloc(sizeof(Float32) * numSamples);
}

// Do stuff...

for(int i = 0; i < 2; i++) {
  free(bufferList->mBuffers[i].mData);
}
free(bufferList);

The above code assumes that you are reading in the data as floating point. If you aren't doing any special processing on the files, it's more efficient to read them in as SInt16 (raw PCM data), since the iPhone doesn't have an FPU.
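If you do read the data as SInt16, only the sample type and sizes change in the loop above (a hypothetical variant of those two lines):

bufferList->mBuffers[i].mDataByteSize = numSamples * sizeof(SInt16);
bufferList->mBuffers[i].mData = (SInt16*)malloc(sizeof(SInt16) * numSamples);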

Also, if you aren't using a list outside of a single method, then it makes more sense to allocate it on the stack instead of the heap by declaring it as a regular variable rather than a pointer. You still need to malloc() the actual mData member of each AudioBuffer, but at least you don't need to worry about free()'ing the AudioBufferList itself.
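A minimal sketch of the stack-allocated form. Note that a plain AudioBufferList struct only declares space for one AudioBuffer (its mBuffers member is a one-element array), so this form only suits mono or interleaved data:

AudioBufferList bufferList;
int numSamples = 123456; // assumed number of sample frames in the buffer
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 1;
bufferList.mBuffers[0].mDataByteSize = numSamples * sizeof(SInt16);
bufferList.mBuffers[0].mData = (SInt16*)malloc(sizeof(SInt16) * numSamples);

// Do stuff...

free(bufferList.mBuffers[0].mData); // only the sample data needs freeing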

Nik Reiman
Thanks. So if I have interleaved audio channels I should create 3 separate AudioBufferLists with a single AudioBuffer in each, right? But then what's the point of using AudioBufferLists? If I understand what you say correctly, I would be better off with 3 AudioBuffers (and no AudioBufferLists) - in this case at least.
Tom Ilsinszki
Yes, that would be correct. The point of using AudioBufferLists is to make it easier to manage multi-channel data, since interleaved data is usually a real pain to do DSP operations on (see the sketch below). It's much nicer to have separated stereo channels, with each channel in its own buffer. Imagine working with a 4.1 surround signal -- then you'd have a single buffer with 5 interleaved channels! Not too much fun to work with.
Nik Reiman
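To make that point concrete, here is a minimal sketch of the shuffling that interleaved data forces on you before you can process one channel in isolation (DeinterleaveStereo is a hypothetical helper name, not part of the CoreAudio API):

// Split interleaved stereo samples out into separate per-channel buffers
void DeinterleaveStereo(const Float32 *interleaved, Float32 *left, Float32 *right, UInt32 numFrames)
{
    for(UInt32 frame = 0; frame < numFrames; ++frame) {
        left[frame]  = interleaved[2 * frame];     // even positions hold the left channel
        right[frame] = interleaved[2 * frame + 1]; // odd positions hold the right channel
    }
}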
Ah, but I didn't say to not use AudioBufferLists entirely. Even though it may seem silly to pass a single AudioBuffer inside of an AudioBufferList, they are much easier to pass around within the CoreAudio API. Besides, the AudioBufferList struct itself doesn't impose much memory overhead.
Nik Reiman
Be careful with the allocation here: an ABL allocated with a plain sizeof(AudioBufferList) only has enough space for a single AudioBuffer, so storing two would trash memory. The allocation size of the ABL needs to be increased by one AudioBuffer, as the malloc() above does.
sbooth
A: 

Here is how I do it:

AudioBufferList *
AllocateABL(UInt32 channelsPerFrame, UInt32 bytesPerFrame, bool interleaved, UInt32 capacityFrames)
{
    AudioBufferList *bufferList = NULL;

    // Interleaved data keeps every channel in one buffer;
    // non-interleaved data uses one single-channel buffer per channel
    UInt32 numBuffers = interleaved ? 1 : channelsPerFrame;
    UInt32 channelsPerBuffer = interleaved ? channelsPerFrame : 1;

    // Allocate the list header plus exactly numBuffers AudioBuffer entries
    bufferList = static_cast<AudioBufferList *>(calloc(1, offsetof(AudioBufferList, mBuffers) + (sizeof(AudioBuffer) * numBuffers)));

    bufferList->mNumberBuffers = numBuffers;

    for(UInt32 bufferIndex = 0; bufferIndex < bufferList->mNumberBuffers; ++bufferIndex) {
        bufferList->mBuffers[bufferIndex].mData = static_cast<void *>(calloc(capacityFrames, bytesPerFrame));
        bufferList->mBuffers[bufferIndex].mDataByteSize = capacityFrames * bytesPerFrame;
        bufferList->mBuffers[bufferIndex].mNumberChannels = channelsPerBuffer;
    }

    return bufferList;
}
sbooth
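
A matching routine to release a list created by AllocateABL isn't shown above; a minimal sketch of one (DeallocateABL is an assumed name) would simply mirror the allocation:

void
DeallocateABL(AudioBufferList *bufferList)
{
    if(NULL == bufferList)
        return;

    // Free each buffer's sample data, then the list structure itself
    for(UInt32 bufferIndex = 0; bufferIndex < bufferList->mNumberBuffers; ++bufferIndex)
        free(bufferList->mBuffers[bufferIndex].mData);

    free(bufferList);
}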