Consider the aurioTouch sample application provided by Apple. I wanted to simulate lengthy processing of the recorded audio, so I modified the sample by introducing a 0.1-second delay in the render callback method, PerformThru. This leads to distorted audio and static noise being played through the iPhone's speaker or headphones. The noise is heard even when the application's mute button is on, which essentially outputs silence to the AudioUnit output bus.

Can anybody give a detailed explanation of what happens at the low level when the host's callback function (in our sample, PerformThru) does not return in a timely fashion?

Why does a callback function that performs poorly make the iPhone play back noise? I understand that the code in the callback function must be highly optimized, but I would still like to know whether it is possible to prevent the noise from happening.

Is it possible to modify the aurioTouch sample so that the AudioUnit does just the recording, with playback switched off completely?

+2  A: 

If you want to introduce a delay then you need to do it via buffering, not by simply delaying the callback. E.g. for 0.1 s at a 44.1 kHz sample rate you would need to buffer an additional 4410 samples: initially you pass 4410 zeros, and then start passing your buffered (delayed) samples.
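A minimal sketch of that idea in plain C (hypothetical names, not code from the aurioTouch sample): a circular delay line that the render callback would call once per sample, returning zeros until 4410 samples have accumulated and the delayed samples after that, so the callback itself always returns immediately.

```c
#include <stdint.h>
#include <string.h>

#define SAMPLE_RATE   44100
#define DELAY_SECONDS 0.1f
#define DELAY_SAMPLES ((int)(SAMPLE_RATE * DELAY_SECONDS))  /* 4410 */

/* Circular delay line: holds the last DELAY_SAMPLES input samples
 * and returns the sample that arrived 0.1 s ago. */
typedef struct {
    int16_t buf[DELAY_SAMPLES];
    int     pos;   /* index of the oldest sample */
} DelayLine;

static void delay_init(DelayLine *d) {
    memset(d->buf, 0, sizeof d->buf);  /* the initial 4410 zeros */
    d->pos = 0;
}

/* Store one input sample, return one delayed output sample. */
static int16_t delay_process(DelayLine *d, int16_t in) {
    int16_t out = d->buf[d->pos];      /* oldest sample (or 0 at start) */
    d->buf[d->pos] = in;               /* overwrite with the new sample */
    d->pos = (d->pos + 1) % DELAY_SAMPLES;
    return out;
}
```

Inside the real render callback you would loop `delay_process` over the samples in the provided AudioBufferList; the per-sample work is constant-time, so no deadline is missed.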

Paul R