+2  A: 

It seems you are solving the wrong task, because AVAudioPlayer is capable of playing only a whole audio file. You should use Audio Queue Services from the AudioToolbox framework instead, to play audio on a packet-by-packet basis. In fact you need not divide the audio file into real sound packets; you can use any data block like in your own example above, but then you should read the received data chunks using Audio File Services or Audio File Stream Services functions (from AudioToolbox) and feed them to audio queue buffers.

If you nevertheless want to divide the audio file into sound packets, you can easily do it with Audio File Services functions. An audio file consists of a header, where properties such as the number of packets, sample rate, and number of channels are stored, followed by the raw sound data.

Use AudioFileOpenURL to open the audio file and read its properties with the AudioFileGetProperty function. Basically you only need the kAudioFilePropertyDataFormat and kAudioFilePropertyAudioDataPacketCount properties:

AudioFileID  fileID;    // the identifier for the audio file
CFURLRef     fileURL = ...; // file URL
AudioStreamBasicDescription format; // structure containing audio header info
UInt64       packetsCount;

AudioFileOpenURL(fileURL, 
    0x01, //fsRdPerm,                       // read only
    0, //no hint
    &fileID
);

UInt32 sizeOfPlaybackFormatASBDStruct = sizeof format;
AudioFileGetProperty (
    fileID, 
    kAudioFilePropertyDataFormat,
    &sizeOfPlaybackFormatASBDStruct,
    &format
);

UInt32 propertySize = sizeof(packetsCount);
AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataPacketCount, &propertySize, &packetsCount);

Then you can take any range of audiopackets data with:

   OSStatus AudioFileReadPackets (
       AudioFileID                  inAudioFile,
       Boolean                      inUseCache,
       UInt32                       *outNumBytes,
       AudioStreamPacketDescription *outPacketDescriptions,
       SInt64                       inStartingPacket,
       UInt32                       *ioNumPackets,
       void                         *outBuffer
    );
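For constant-bit-rate formats (mBytesPerPacket != 0 in the AudioStreamBasicDescription), the mapping from a packet range to a byte range inside the audio data is plain arithmetic. A minimal C sketch of that bookkeeping (the helper names are mine for illustration, not AudioToolbox API):

```c
#include <stdint.h>

/* For CBR audio, packet N starts at a fixed byte offset inside the
   audio data region. Hypothetical helpers, shown only to make the
   packet/byte relationship concrete. */
static int64_t cbr_packet_byte_offset(int64_t startingPacket,
                                      uint32_t bytesPerPacket) {
    return startingPacket * (int64_t)bytesPerPacket;
}

/* Bytes needed to hold a run of consecutive packets. */
static uint32_t cbr_bytes_for_packets(uint32_t numPackets,
                                      uint32_t bytesPerPacket) {
    return numPackets * bytesPerPacket;
}
```

For VBR formats this arithmetic does not hold, which is why AudioFileReadPackets also fills in AudioStreamPacketDescription entries.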
Vladimir
Hi Vladimir, I will take a look at it. I have been reading some material about Audio Queue Services, but I haven't had time to play around with it yet. Would you be able to help me configure the functions "sendData" and "receiveData" to work with the queue service? Or could you provide a good example that uses it? Thanks.
vfn
In this case there is no need for special data formatting because they are sound data; just send/receive them like any other binary data. You only need to ensure that you have a large enough block to begin playing, containing all the header data. Its size depends on the format, e.g. 44 bytes for an LPCM WAV, probably more for compressed formats (I don't know the exact values). During playback you try to read successive blocks of audio data, and if that fails (AudioFileReadPackets returns 0 packets) but not all data has been loaded yet, pause until you receive more.
Vladimir
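The 44-byte figure above refers to the canonical LPCM WAV layout (RIFF chunk descriptor, "fmt " chunk, "data" chunk header). A hedged C sketch of reading the fields a receiver needs before it can start playing; it assumes the canonical layout only, since real files may carry extra chunks:

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Canonical 44-byte WAV header; an illustration, not a general parser. */
typedef struct {
    uint16_t channels;
    uint32_t sampleRate;
    uint16_t bitsPerSample;
    uint32_t dataBytes;   /* size of the raw sound data that follows */
} WavInfo;

/* Returns 0 on success, -1 if the buffer is not a canonical LPCM WAV.
   WAV fields are little-endian, so bytes are assembled explicitly. */
static int parse_canonical_wav(const uint8_t *buf, size_t len, WavInfo *out) {
    if (len < 44) return -1;
    if (memcmp(buf, "RIFF", 4) || memcmp(buf + 8, "WAVE", 4)) return -1;
    if (memcmp(buf + 12, "fmt ", 4) || memcmp(buf + 36, "data", 4)) return -1;
    out->channels      = (uint16_t)(buf[22] | buf[23] << 8);
    out->sampleRate    = (uint32_t)(buf[24] | buf[25] << 8 |
                                    buf[26] << 16 | (uint32_t)buf[27] << 24);
    out->bitsPerSample = (uint16_t)(buf[34] | buf[35] << 8);
    out->dataBytes     = (uint32_t)(buf[40] | buf[41] << 8 |
                                    buf[42] << 16 | (uint32_t)buf[43] << 24);
    return 0;
}
```

In practice you would let Audio File Services or Audio File Stream Services do this parsing; the sketch only shows why the first transferred block must contain the whole header.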
Hi Vladimir, could you help me with how to receive the packets and put them on the queue to be played? What I am doing now is reading each packet with AudioFileReadPackets and sending them to the client. How can I get these packets and play the whole queue?
vfn
If you prefer, I can create another question related to this one so you can answer it. Thanks!
vfn
Please specify: are you speaking about data transfer, or about playing already received data?
Vladimir
Hi Vladimir, I need help with how to enqueue and play the received packets. On peerA I will serialize the packets and use a method to send them to peerB. On peerB I will have a method that listens to the connection, waiting for new packets; when a new packet arrives, it enqueues it, and playback starts as soon as the first packet arrives. So, what info should I send to peerB that is necessary to create the queue and play the arriving packets? Cheers
vfn
There are 3 ways:
1. Don't bother about the audio data structure and transfer data from the peerA audio file in blocks of any size, appending them to a temp file on peerB; you can begin playing this file when only a few KB have been transferred. (I think this way is best, because you can replay the file or seek to any position even if it is very large.)
2. Put the transferred data (on peerB) into a stream and play it using AudioFileStream functions (but with no seek functionality).
3. (I see you are trying to implement this case.) Read data with Audio File Services on peerA, then transfer the parsed packets.
Vladimir
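Way 1 above can be sketched in plain C: peerB appends each received block to a temp file and starts playback once some threshold is buffered. The 16 KB threshold and the function names are illustrative choices, not values from AudioToolbox:

```c
#include <stdio.h>
#include <stddef.h>

/* Illustrative receive path for way 1: append incoming blocks to a temp
   file; once enough bytes (header plus a few KB of sound data) have
   arrived, it is safe to hand the file to the player. */
#define START_THRESHOLD (16 * 1024)   /* arbitrary example value */

typedef struct {
    FILE   *tmp;        /* growing copy of the audio file on peerB */
    size_t  received;   /* bytes written so far */
    int     playing;    /* has playback been started yet? */
} ReceiveState;

/* Returns 1 exactly once, the first time enough data is buffered to
   start playing; 0 otherwise. */
static int on_block_received(ReceiveState *st, const void *data, size_t len) {
    fwrite(data, 1, len, st->tmp);
    fflush(st->tmp);
    st->received += len;
    if (!st->playing && st->received >= START_THRESHOLD) {
        st->playing = 1;
        return 1;   /* caller would now create the AudioFile/AQPlayer */
    }
    return 0;
}
```

When AudioFileReadPackets later runs out of data before the transfer finishes, the player pauses and resumes once more blocks have been appended, exactly as described in the comment above.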
I'll begin a new question, because comments are not handy for formatted text, and I will add code.
Vladimir
Hi, I think the first option would be great if it works for partial files. For example, imagine that peerA is sending the packets and peerB is receiving, and then peerC joins and receives the packets too, but from some arbitrary part of the file, not necessarily from the beginning. Would that work, or would only scenario 2 do it? Let me know what you think and I will create the question based on it. Thanks
vfn
The first variant is the simplest and most flexible for any case, and in all cases you need the audio file's properties. Having them, you can play data from any part, not necessarily from the beginning. The second is the least flexible. The third is like the first but more complicated and with a size limitation.
Vladimir
Look at the code in the new answer.
Vladimir
A: 

Apple already has written something that can do this: AUNetSend and AUNetReceive. AUNetSend is an effect AudioUnit that sends audio to an AUNetReceive AudioUnit on another computer.

Unfortunately these AUs are not available on the iPhone, though.

sbooth
Hi sbooth, thanks for that! I need to control the way the messages are sent, so AUNetSend/Receive don't help with my problem. Thanks again.
vfn
+3  A: 

Here is the simplest class to play files with an audio queue. Note that you can play from any point (just set currentPacketNumber):

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

@interface AudioFile : NSObject {
    AudioFileID                     fileID;     // the identifier for the audio file to play
    AudioStreamBasicDescription     format;
    UInt64                          packetsCount;           
    UInt32                          maxPacketSize;  
}

@property (readwrite)           AudioFileID                 fileID;
@property (readwrite)           UInt64                      packetsCount;
@property (readwrite)           UInt32                      maxPacketSize;

- (id) initWithURL: (CFURLRef) url;
- (AudioStreamBasicDescription *)audioFormatRef;

@end


//  AudioFile.m

#import "AudioFile.h"


@implementation AudioFile

@synthesize fileID;
@synthesize format;
@synthesize maxPacketSize;
@synthesize packetsCount;

- (id)initWithURL:(CFURLRef)url{
    if (self = [super init]){       
        AudioFileOpenURL(
                         url,
                         0x01, //fsRdPerm, read only
                         0, //no hint
                         &fileID
                         );

        UInt32 sizeOfPlaybackFormatASBDStruct = sizeof format;
        AudioFileGetProperty (
                              fileID, 
                              kAudioFilePropertyDataFormat,
                              &sizeOfPlaybackFormatASBDStruct,
                              &format
                              );

        UInt32 propertySize = sizeof (maxPacketSize);

        AudioFileGetProperty (
                              fileID, 
                              kAudioFilePropertyMaximumPacketSize,
                              &propertySize,
                              &maxPacketSize
                              );

        propertySize = sizeof(packetsCount);
        AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataPacketCount, &propertySize, &packetsCount);
    }
    return self;
} 

-(AudioStreamBasicDescription *)audioFormatRef{
    return &format;
}

- (void) dealloc {
    AudioFileClose(fileID);
    [super dealloc];
}

@end



//  AQPlayer.h

#import <Foundation/Foundation.h>
#import "AudioFile.h"

#define AUDIOBUFFERS_NUMBER     3
#define MAX_PACKET_COUNT    4096

@interface AQPlayer : NSObject {
@public
    AudioQueueRef                   queue;
    AudioQueueBufferRef             buffers[AUDIOBUFFERS_NUMBER];
    NSInteger                       bufferByteSize;
    AudioStreamPacketDescription    packetDescriptions[MAX_PACKET_COUNT];

    AudioFile * audioFile;
    SInt64  currentPacketNumber;
    UInt32  numPacketsToRead;
}

@property (nonatomic)               SInt64          currentPacketNumber;
@property (nonatomic, retain)       AudioFile       * audioFile;

-(id)initWithFile:(NSString *)file;
-(NSInteger)fillBuffer:(AudioQueueBufferRef)buffer;
-(void)play;

@end 

//  AQPlayer.m

#import "AQPlayer.h"

static void AQOutputCallback(void * inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    AQPlayer * aqp = (AQPlayer *)inUserData;
    [aqp fillBuffer:(AudioQueueBufferRef)inBuffer];
}

// Minimal listener for kAudioQueueProperty_IsRunning; extend as needed
// (e.g. to dispose of the queue once playback has actually stopped).
static void AQPropertyListenerCallback(void * inUserData, AudioQueueRef inAQ, AudioQueuePropertyID inID) {
    // queue started or stopped; nothing to do in this simple player
}

@implementation AQPlayer

@synthesize currentPacketNumber;
@synthesize audioFile;

-(id)initWithFile:(NSString *)file{
    if (self = [super init]){
        audioFile = [[AudioFile alloc] initWithURL:[NSURL fileURLWithPath:file]];
        currentPacketNumber = 0;
        AudioQueueNewOutput ([audioFile audioFormatRef], AQOutputCallback, self, CFRunLoopGetCurrent (), kCFRunLoopCommonModes, 0, &queue);
        bufferByteSize = 4096;
        if (bufferByteSize < audioFile.maxPacketSize) bufferByteSize = audioFile.maxPacketSize; 
        numPacketsToRead = bufferByteSize/audioFile.maxPacketSize;
        for(int i=0; i<AUDIOBUFFERS_NUMBER; i++){
            AudioQueueAllocateBuffer (queue, bufferByteSize, &buffers[i]);
        }
        AudioQueueAddPropertyListener( queue, kAudioQueueProperty_IsRunning, AQPropertyListenerCallback, self);
    }
    return self;
}

-(void) dealloc{
    [audioFile release];
    if (queue){
        AudioQueueDispose(queue, YES);
        queue = nil;
    }
    [super dealloc];
}

- (void)play{
    for (int bufferIndex = 0; bufferIndex < AUDIOBUFFERS_NUMBER; ++bufferIndex){
        [self fillBuffer:buffers[bufferIndex]];
    }
    AudioQueueStart (queue, NULL);

}

-(NSInteger)fillBuffer:(AudioQueueBufferRef)buffer{
    UInt32 numBytes;
    UInt32 numPackets = numPacketsToRead;
    BOOL isVBR = [audioFile audioFormatRef]->mBytesPerPacket == 0 ? YES : NO;
    AudioFileReadPackets(
                         audioFile.fileID,
                         NO,
                         &numBytes,
                         isVBR ? packetDescriptions : 0,
                         currentPacketNumber,
                         &numPackets, 
                         buffer->mAudioData
                         );

    if (numPackets > 0) {
        buffer->mAudioDataByteSize = numBytes;      
        AudioQueueEnqueueBuffer (
                                 queue,
                                 buffer,
                                 isVBR ? numPackets : 0,
                                 isVBR ? packetDescriptions : 0
                                 );


    } 
    else{
        // end of present data, check if all packets are played
        // if yes, stop play and dispose queue
        // if no, pause queue till new data arrive then start it again
    }
    return  numPackets;
}

@end
Vladimir
To play data when the beginning of the file is not available (for peerC in vfn's previous comment), you should initially send AudioStreamBasicDescription format, UInt64 packetsCount, and UInt32 maxPacketSize, so that the audio queue can be created; then feed any segment of the file to an AudioFileStream with AudioFileStreamParseBytes, and in your AudioFileStream_PacketsProc you get the parsed data to fill the AQ buffers.
Vladimir
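Those three properties can be sent once, ahead of any audio data, as a small fixed-layout header. A hedged C sketch of the serialization; the field set and wire layout here are illustrative choices, not a standard format:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative wire header carrying the stream properties a late-joining
   peer needs before any packet arrives. Layout is an example, not a spec. */
typedef struct {
    uint64_t packetsCount;
    uint32_t sampleRate;     /* Hz, truncated from the Float64 in the ASBD */
    uint32_t maxPacketSize;
    uint32_t channels;
} StreamHeader;

/* Writes one 32-bit value in big-endian (network) byte order. */
static size_t pack_u32(uint8_t *p, uint32_t v) {
    p[0] = (uint8_t)(v >> 24); p[1] = (uint8_t)(v >> 16);
    p[2] = (uint8_t)(v >> 8);  p[3] = (uint8_t)v;
    return 4;
}

/* Serializes the header; returns the number of bytes written (20). */
static size_t stream_header_pack(const StreamHeader *h, uint8_t *buf) {
    size_t n = 0;
    n += pack_u32(buf + n, (uint32_t)(h->packetsCount >> 32));
    n += pack_u32(buf + n, (uint32_t)h->packetsCount);
    n += pack_u32(buf + n, h->sampleRate);
    n += pack_u32(buf + n, h->maxPacketSize);
    n += pack_u32(buf + n, h->channels);
    return n;
}
```

The receiver unpacks these fields, fills in an AudioStreamBasicDescription, creates the queue with AudioQueueNewOutput, and then pushes whatever file segments arrive through AudioFileStreamParseBytes.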