views: 61
answers: 2

When recording from a microphone in CoreAudio, what is kAudioDevicePropertyBufferFrameSize for? The docs say it's "A UInt32 whose value indicates the number of frames in the IO buffers". However, this doesn't give any indication of why you would want to set it.

The kAudioDevicePropertyBufferFrameSizeRange property gives you a valid minimum and maximum for the buffer frame size. Does setting the buffer frame size to the maximum slow things down? When would you want to set it to something other than the default?

A: 

Usually you'd leave it at the default, but you might want to change the buffer size if you have an AudioUnit in the processing chain that expects or is optimized for a certain buffer size.

Also, as a general rule, larger buffer sizes result in higher latency between recording and playback, while smaller buffer sizes increase the CPU load of each channel being recorded, because the IO callback has to fire more often.
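The latency half of that tradeoff is easy to quantify: one IO buffer's worth of delay is bufferFrames / sampleRate (the sample rate below is my assumption, not from the answer):

```c
// One IO buffer's worth of recording-to-playback delay, in milliseconds:
// the device fills a whole buffer before the IOProc ever sees the data.
static double buffer_latency_ms(unsigned frames, double sample_rate_hz) {
    return 1000.0 * frames / sample_rate_hz;
}
```

At an assumed 44.1 kHz, a 4096-frame buffer is about 92.9 ms of buffering, while a 128-frame buffer is about 2.9 ms, which is why low-latency apps push the size down and pay for it in callback frequency.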

lucius
+1  A: 

Here's what they had to say on the CoreAudio list:

An application that is looking for low latency IO should set this value to the smallest size it can keep up with.

On the other hand, apps that don't have strong interaction requirements or other reasons for low latency can increase this value so that the data is delivered in larger chunks, reducing the number of times per second the IOProc gets called. Note that this does not necessarily lower the total load on the system. In fact, increasing the IO buffer size can have the opposite effect: the buffers are larger, which makes them much less likely to fit in caches and the like, and that can really sap performance.

At the end of the day, the value an app chooses for its IO size is really something that is dependent on the app and what it does.
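To put numbers on "the number of times per second the IOProc gets called" (the sample rate is again my assumption):

```c
// How many times per second the IOProc fires for a given buffer size:
// the hardware produces sample_rate_hz frames per second, delivered in
// chunks of `frames`.
static double ioproc_calls_per_sec(double sample_rate_hz, unsigned frames) {
    return sample_rate_hz / frames;
}
```

At an assumed 44.1 kHz, 128-frame buffers mean roughly 345 IOProc calls per second, while 4096-frame buffers mean roughly 11, which is the per-call-overhead savings the list post describes.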

paleozogt