Hello Good People,

I'm a beginning developer who has done a bit of audio work. I was wondering: what is the usual approach to getting started with audio programming on the iPhone platform?

I envision this as somehow getting a vector of numbers that represents the samples in an audio track, then programmatically running algorithms on that vector (or array) that act as filters or other DSP functions.

I could be totally off in my thinking, but I guess that is the point of my question: how do I get to the point where I can start coding audio at a very low level in the iPhone environment?

Thanks!

+1  A: 

The aurioTouch example from the Apple site may be of use. The example analyzes the audio data and displays its frequency domain.

From the example:

The code uses the AU Remote IO audio unit to get the audio input and copy it to the output. The UI presents:

- Oscilloscope view of the audio (time domain and frequency domain)
- Scrolling sonogram of the audio
- Mute button to turn the play-through on/off

Jeroen de Leeuw
A: 

Both the Audio Queue and the Audio Unit Remote IO APIs will let you get a vector of numbers representing the mic input, or play a vector of numbers (PCM samples) to the speaker output. RemoteIO is a slightly more complicated API, but it allows lower latency (shorter vectors).

You can't get a vector of numbers in real time for the currently playing iTunes music on a stock OS device. However, there are newer APIs for getting and processing the track data in non-real-time.

hotpaw2