I'm having difficulty extracting amplitude data from linear PCM recorded on the iPhone and stored in audio.caf.

My questions are:

  1. Linear PCM stores amplitude samples as 16-bit values. Is this correct?
  2. How is amplitude stored in the packets returned by AudioFileReadPacketData()? When recording mono linear PCM, isn't the data just an array of SInt16 samples (one sample per frame, one frame per packet)? What is the byte order (big-endian vs. little-endian)?
  3. What does each step in linear PCM amplitude mean physically?
  4. When linear PCM is recorded on the iPhone, is the center point 0 (SInt16) or 32768 (UInt16)? What do the max/min values mean in terms of the physical waveform/air pressure?

and a bonus question: Are there sound/air pressure waveforms that the iPhone mic can't measure?

My code follows:

// get the audio file proxy object for the audio
AudioFileID fileID;
AudioFileOpenURL((CFURLRef)audioURL, kAudioFileReadPermission, kAudioFileCAFType, &fileID);

// get the number of packets of audio data contained in the file
UInt64 totalPacketCount = [self packetCountForAudioFile:fileID];

// get the size of each packet for this audio file
UInt32 maxPacketSizeInBytes = [self packetSizeForAudioFile:fileID];

// setup to extract the audio data
Boolean inUseCache = false;
UInt32 numberOfPacketsToRead = 4410; // 0.1 seconds of data
UInt32 ioNumPackets = numberOfPacketsToRead;
UInt32 ioNumBytes = maxPacketSizeInBytes * ioNumPackets;
char *outBuffer = malloc(ioNumBytes);
memset(outBuffer, 0, ioNumBytes);

SInt16 signedMinAmplitude = -32768;
SInt16 signedCenterpoint = 0;
SInt16 signedMaxAmplitude = 32767;

SInt16 minAmplitude = signedMaxAmplitude;
SInt16 maxAmplitude = signedMinAmplitude;

// process each and every packet
for (UInt64 packetIndex = 0; packetIndex < totalPacketCount; packetIndex = packetIndex + ioNumPackets)
{
   // reset the number of packets to get
   ioNumPackets = numberOfPacketsToRead;

   AudioFileReadPacketData(fileID, inUseCache, &ioNumBytes, NULL, packetIndex, &ioNumPackets, outBuffer);

   for (UInt32 batchPacketIndex = 0; batchPacketIndex < ioNumPackets; batchPacketIndex++)
   {
      SInt16 packetData = outBuffer[batchPacketIndex * maxPacketSizeInBytes];
      SInt16 absoluteValue = abs(packetData);

      if (absoluteValue < minAmplitude) { minAmplitude = absoluteValue; }
      if (absoluteValue > maxAmplitude) { maxAmplitude = absoluteValue; }
   }
}

NSLog(@"minAmplitude: %hi", minAmplitude);
NSLog(@"maxAmplitude: %hi", maxAmplitude);

With this code I almost always get a min of 0 and a max of 128! That makes no sense to me.

I'm recording the audio using the AVAudioRecorder as follows:

// specify mono, 44.1 kHz, Linear PCM with Max Quality as recording format
NSDictionary *recordSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
   [NSNumber numberWithFloat: 44100.0], AVSampleRateKey,
   [NSNumber numberWithInt: kAudioFormatLinearPCM], AVFormatIDKey,
   [NSNumber numberWithInt: 1], AVNumberOfChannelsKey,
   [NSNumber numberWithInt: AVAudioQualityMax], AVEncoderAudioQualityKey,
   nil];

// store the sound file in the app doc folder as calibration.caf
NSString *documentsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
NSURL *audioFileURL = [NSURL fileURLWithPath:[documentsDir stringByAppendingPathComponent: @"audio.caf"]];

// create the audio recorder
NSError *createAudioRecorderError = nil;
AVAudioRecorder *newAudioRecorder = [[AVAudioRecorder alloc] initWithURL:audioFileURL settings:recordSettings error:&createAudioRecorderError];
[recordSettings release];

if (newAudioRecorder)
{
   // record the audio
   self.recorder = newAudioRecorder;
   [newAudioRecorder release];

   self.recorder.delegate = self;
   [self.recorder prepareToRecord];
   [self.recorder record];
}
else
{
   NSLog(@"%@", [createAudioRecorderError localizedDescription]);
}

Thanks for any insight you can offer. This is my first project using Core Audio, so feel free to tear apart my approach!

P.S. I have tried searching the Core Audio list archives, but the search keeps returning an error: ( http://search.lists.apple.com/?q=linear+pcm+amplitude&cmd=Search%21&ul=coreaudio-api )

P.P.S. I have looked at:

http://en.wikipedia.org/wiki/Sound_pressure

http://en.wikipedia.org/wiki/Linear_PCM

http://stackoverflow.com/questions/2698411/amplitude-per-sample-iphone-audio

http://wiki.multimedia.cx/index.php?title=PCM

http://stackoverflow.com/questions/742546/get-the-amplitude-at-a-given-time-within-a-sound-file

http://music.columbia.edu/pipermail/music-dsp/2002-April/048341.html

I have also read the entirety of the Core Audio Overview and most of the Audio Session Programming Guide, but my questions remain.

+1  A: 
  1. If you ask for 16-bit samples in your recording format, then you get 16-bit samples. But other formats exist in many of the Core Audio record/play APIs and in the CAF file format.

  2. In mono, you just get an array of signed 16-bit ints. You can specifically ask for big- or little-endian in some of the Core Audio recording APIs (see the settings sketch after this list).

  3. Unless you calibrate for your particular device model's mic or your external mic (and make sure audio processing/AGC is turned off), you should consider the audio levels to be arbitrarily scaled. The response also varies with mic directionality and audio frequency.

  4. The center point for 16-bit audio samples is commonly 0 (range about -32k to 32k). No bias.
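
For points 1 and 2: rather than relying on defaults, you can pin the sample format down in the recorder settings. A minimal sketch (untested; the AVLinearPCM* keys are standard AVFoundation settings, and pcmSettings is just an illustrative name):

// Sketch: explicitly request 16-bit, little-endian, integer linear PCM so
// there is no ambiguity about what ends up in the file.
NSDictionary *pcmSettings = [NSDictionary dictionaryWithObjectsAndKeys:
   [NSNumber numberWithFloat: 44100.0], AVSampleRateKey,
   [NSNumber numberWithInt: kAudioFormatLinearPCM], AVFormatIDKey,
   [NSNumber numberWithInt: 1], AVNumberOfChannelsKey,
   [NSNumber numberWithInt: 16], AVLinearPCMBitDepthKey,
   [NSNumber numberWithBool: NO], AVLinearPCMIsBigEndianKey,
   [NSNumber numberWithBool: NO], AVLinearPCMIsFloatKey,
   nil];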

hotpaw2
+1  A: 

1) The OS X/iPhone file-read routines allow you to determine the sample format; for LPCM it is typically one of SInt8, SInt16, SInt32, Float32, Float64, or a contiguous 24-bit signed int.

2) For int formats, the type's minimum value represents the maximum amplitude in the negative phase and the type's maximum value represents the maximum amplitude in the positive phase; 0 is silence. Floating-point formats range over [-1, 1], again with 0 as silence. Endianness matters whenever you read, write, record, or process samples: a file may require a specific byte order, and you typically want to manipulate the data in the machine's native byte order. Some routines in Apple's audio file libraries let you pass a flag denoting the source endianness rather than converting manually. CAF is a bit more complicated; it acts as a meta wrapper for one or more audio streams and supports many sample types.

3) The amplitude representation in LPCM is just a brute-force linear one: no conversion/decoding is required for playback, and the amplitude steps are equal in size.

4) See #2. The values are not related to air pressure; they are related to 0 dBFS. E.g., if you're outputting the stream straight to a DAC, the int max (or ±1 if floating point) represents the level at which an individual sample will clip. (The sketch below shows the dBFS arithmetic.)

Bonus) Yes. The iPhone mic, like every ADC and component chain, has limits to the input voltage it can handle. Additionally, the sampling rate caps the highest frequency that can be captured at half the sampling rate (about 22.05 kHz when recording at 44.1 kHz). The ADC may use a fixed or selectable bit depth, but the maximum input voltage does not generally change when choosing another bit depth.
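
A rough sketch tying 1), 2), and 4) together, assuming the fileID from the question: query the file's AudioStreamBasicDescription to see the sample format that was actually written, then relate a (hypothetical) 16-bit sample value to 0 dBFS:

#import <AudioToolbox/AudioToolbox.h>
#include <math.h>

// Sketch: inspect the format the recorder actually wrote.
AudioStreamBasicDescription asbd;
UInt32 propSize = sizeof(asbd);
AudioFileGetProperty(fileID, kAudioFilePropertyDataFormat, &propSize, &asbd);
NSLog(@"bits per channel: %u", (unsigned int)asbd.mBitsPerChannel);
NSLog(@"is float: %d", (asbd.mFormatFlags & kAudioFormatFlagIsFloat) != 0);
NSLog(@"is big-endian: %d", (asbd.mFormatFlags & kAudioFormatFlagIsBigEndian) != 0);

// Sketch: a 16-bit sample's level relative to full scale (0 dBFS).
SInt16 sample = -16384;                                     // hypothetical value
double dBFS = 20.0 * log10(fabs((double)sample) / 32768.0); // about -6.0 dBFS
NSLog(@"level: %.1f dBFS", dBFS);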

One mistake you're making at the code level: you're manipulating `outBuffer` as chars, not as SInt16 samples, so you only ever see one byte of each 16-bit sample (which is why your max tops out around 128). A corrected inner loop is sketched below.
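
Something along these lines should behave better (a sketch assuming the file really is 16-bit mono LPCM in native byte order, so each packet holds one SInt16 frame):

// Sketch: treat the buffer as SInt16 samples rather than chars.
SInt16 *samples = (SInt16 *)outBuffer;
for (UInt32 batchPacketIndex = 0; batchPacketIndex < ioNumPackets; batchPacketIndex++)
{
   // mono, one frame per packet: packet index == sample index
   SInt16 sampleValue = samples[batchPacketIndex];
   SInt16 absoluteValue = abs(sampleValue); // caution: abs(-32768) overflows SInt16

   if (absoluteValue < minAmplitude) { minAmplitude = absoluteValue; }
   if (absoluteValue > maxAmplitude) { maxAmplitude = absoluteValue; }
}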

Justin