I am looking at the Audio Queue Services documentation, specifically the following code:
// Writing an audio queue buffer to disk
AudioFileWritePackets(
    pAqData->mAudioFile,            // the audio file to write to
    false,                          // don't cache the data
    inBuffer->mAudioDataByteSize,   // number of bytes of audio data to write
    inPacketDesc,                   // packet descriptions (NULL for CBR formats)
    pAqData->mCurrentPacket,        // starting packet index in the file
    &inNumPackets,                  // in: packets to write; out: packets actually written
    inBuffer->mAudioData            // the audio data to write
);
inBuffer->mAudioDataByteSize is the number of bytes of audio data being written. inBuffer->mAudioData is the new audio data to write to the audio file.
Assuming the sample rate is 44100.
AudioStreamBasicDescription mDataFormat;
mDataFormat.mSampleRate     = 44100.0;  // mSampleRate is a Float64
mDataFormat.mBitsPerChannel = 16;
...
// 2 bytes per 16-bit sample (this assumes mono audio)
NSInteger numberSamples = inBuffer->mAudioDataByteSize / 2;
SInt16 *audioSample = (SInt16 *)inBuffer->mAudioData;
I use Core Plot to chart this, where the x axis is the sample index [1 .. numberSamples] and the y axis is audioSample[0] .. audioSample[numberSamples - 1]. I can see the chart in "real time"; the y axis goes up and down depending on the loudness of my voice.
Beginner questions:
- What does the audioSample represent? What am I looking at here?
- What is the unit of audioSample?
- What do I need to do if I just want to plot the frequency range between 50 and 100 Hz?
Thanks in advance for your help.