I've been playing around with Apple's aurioTouch demo, the sample code for their Audio Unit tutorial. The application allows simultaneous input and output from the mic to the speaker, and it renders a real-time visualization (a spectrograph) of the sound coming in from the mic.
At a high level, this low-level process works like so: the sample code instantiates an AudioComponent (in this case RemoteIO, which allows simultaneous input and output) and registers a render callback for that Audio Unit. In the callback, the code runs the mic samples through a DC rejection filter and drives the spectrograph visualization from the AudioBuffer data.
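To make the setup concrete, here is a minimal sketch of that pattern (not aurioTouch's exact code; error handling omitted, and `MyRenderCallback` is just a name I'm using for the callback):

```c
#include <AudioUnit/AudioUnit.h>

// Forward declaration; the callback body is sketched further below.
static OSStatus MyRenderCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData);

static AudioUnit SetUpRemoteIO(void)
{
    // Describe and instantiate the RemoteIO unit.
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };
    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit rioUnit;
    AudioComponentInstanceNew(comp, &rioUnit);

    // Enable input on bus 1 (RemoteIO has input disabled by default).
    UInt32 one = 1;
    AudioUnitSetProperty(rioUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &one, sizeof(one));

    // Attach the render callback that produces the output samples.
    AURenderCallbackStruct callback = {
        .inputProc       = MyRenderCallback,
        .inputProcRefCon = NULL, // aurioTouch passes its own state struct here
    };
    AudioUnitSetProperty(rioUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &callback, sizeof(callback));

    AudioUnitInitialize(rioUnit);
    return rioUnit;
}
```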
My ultimate goal is to create my own custom sound-distortion Audio Unit driven by the input from the mic. Based on the Audio Unit tutorial, I think the proper way to do this is to create a second Audio Unit and connect the two with an Audio Processing Graph (AUGraph). However, I've read that iOS doesn't allow you to register your own custom Audio Units. My questions are:
- Can I directly manipulate the AudioBufferList that I have access to in the render callback of the RemoteIO Audio Unit (the sample code already seems to do this when it applies its filter) and implement my own custom sound distortion there? (See the sketch after this list.)
- I've tried assigning the AudioBufferList's sample data to a constant (a value I'd seen it hold during a sample run, found by logging the AudioBufferList), but it appears to have no effect on the output.
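For reference, here is roughly the kind of in-place manipulation I mean, as a body for the `MyRenderCallback` declared in the sketch above (again an assumption-laden sketch, not aurioTouch's code: it assumes the unit's stream format was set to 16-bit signed integers, whereas aurioTouch uses the canonical 8.24 fixed-point format, and it assumes the AudioUnit was passed in as the refCon). My "assign a constant" experiment was essentially replacing the loop body below with `samples[i] = kSomeConstant;`:

```c
static OSStatus MyRenderCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    AudioUnit rioUnit = (AudioUnit)inRefCon; // assumes the unit is the refCon

    // Pull the mic samples from input bus 1 into ioData.
    OSStatus err = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp,
                                   1, inNumberFrames, ioData);
    if (err) return err;

    // Distort in place; hard clipping is a trivial stand-in for a real effect.
    for (UInt32 b = 0; b < ioData->mNumberBuffers; ++b) {
        SInt16 *samples = (SInt16 *)ioData->mBuffers[b].mData;
        UInt32 count = ioData->mBuffers[b].mDataByteSize / sizeof(SInt16);
        for (UInt32 i = 0; i < count; ++i) {
            const SInt16 threshold = 8000; // hypothetical clipping level
            if (samples[i] >  threshold) samples[i] =  threshold;
            if (samples[i] < -threshold) samples[i] = -threshold;
        }
    }
    return noErr; // ioData is now what RemoteIO plays out the speaker
}
```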