views:

2408

answers:

6

Hi...

I want to program a simple audio sequencer on the iPhone, but I can't get accurate timing. Over the last few days I tried every audio technique on the iPhone, from AudioServicesPlaySystemSound and AVAudioPlayer through OpenAL to Audio Queues.

In my last attempt I tried the CocosDenshion sound engine, which uses OpenAL and lets you load sounds into multiple buffers and then play them whenever needed. Here is the basic code:

init:

int channelGroups[1];
channelGroups[0] = 8;
soundEngine = [[CDSoundEngine alloc] init:channelGroups channelGroupTotal:1];

int i=0;
for(NSString *soundName in [NSArray arrayWithObjects:@"base1", @"snare1", @"hihat1", @"dit", @"snare", nil])
{
 [soundEngine loadBuffer:i fileName:soundName fileType:@"wav"];
 i++;
}

[NSTimer scheduledTimerWithTimeInterval:0.14 target:self selector:@selector(drumLoop:) userInfo:nil repeats:YES];

In the initialisation I create the sound engine, load some sounds into different buffers and then establish the sequencer loop with NSTimer.

audio loop:

- (void)drumLoop:(NSTimer *)timer
{
for(int track=0; track<4; track++)
{
 unsigned char note=pattern[track][step];
 if(note)
  [soundEngine playSound:note-1 channelGroupId:0 pitch:1.0f pan:.5 gain:1.0 loop:NO];
}

if(++step>=16)
 step=0;

}

That's it, and it works as it should, BUT the timing is shaky and unstable. As soon as something else happens (e.g. drawing in a view) it goes out of sync.

As I understand the sound engine and OpenAL, the buffers are loaded (in the init code) and are then ready to start immediately with alSourcePlay(source) - so the problem may be NSTimer?

Now there are dozens of sound sequencer apps in the App Store and they have accurate timing. E.g. "iDrum" keeps a perfectly stable beat even at 180 bpm while zooming and drawing are going on. So there must be a solution.

Does anybody have any idea?

Thanks for any help in advance!

Best regards,

Walchy

+4  A: 

NSTimer has absolutely no guarantees on when it fires. It schedules itself for a fire time on the runloop, and when the runloop gets around to timers, it sees if any of the timers are past-due. If so, it runs their selectors. Excellent for a wide variety of tasks; useless for this one.

Step one here is that you need to move audio processing to its own thread and get off the UI thread. For timing, you can build your own timing engine using normal C approaches, but I'd start by looking at CAAnimation and especially CAMediaTiming.

Keep in mind that there are many things in Cocoa that are designed only to run on the main thread. Don't, for instance, do any UI work on a background thread. In general, read the docs carefully to see what they say about thread-safety. But generally, if there isn't a lot of communication between the threads (which there shouldn't be in most cases IMO), threads are pretty easy in Cocoa. Look at NSThread.

Rob Napier
A: 

Thanks for your answer. It brought me a step further, but unfortunately not all the way to the goal. Here is what I did:

nextBeat=[[NSDate alloc] initWithTimeIntervalSinceNow:0.1];
[NSThread detachNewThreadSelector:@selector(drumLoop:) toTarget:self withObject:nil];

In the initialisation I store the time for the next beat and create a new thread.

- (void)drumLoop:(id)info
{
    [NSThread setThreadPriority:1.0];

    while(1)
    {
        for(int track=0; track<4; track++)
        {
            unsigned char note=pattern[track][step];
            if(note)
                [soundEngine playSound:note-1 channelGroupId:0 pitch:1.0f pan:.5 gain:1.0 loop:NO];
        }

        if(++step>=16)
            step=0;  

        NSDate *newNextBeat=[[NSDate alloc] initWithTimeInterval:0.1 sinceDate:nextBeat];
        [nextBeat release];
        nextBeat=newNextBeat;
        [NSThread sleepUntilDate:nextBeat];
    }
}

In the sequencer loop I set the thread priority as high as possible and go into an infinite loop. After playing the sounds I calculate the absolute time of the next beat and send the thread to sleep until then.

Again this works, and it works more stably than my attempts without NSThread, but it is still shaky when something else happens, especially GUI stuff.

Is there a way to get real-time responses with NSThread on the iPhone?

Best regards,

Walchy

Walchy
You will give yourself a lot of headaches like this. If you want sample-accurate timing you have to get down to the sample level. You have to know which sample index you are currently playing; forget threading. If you instantiate a RemoteIO output, it runs on its own thread and the callback is called whenever the output needs samples. The callback gets a timestamp, and there you go - you can work out where you are.
Aran Mulholland
I have written one, btw, and it's working.
Aran Mulholland
+2  A: 

I'm doing something similar using RemoteIO output. I do not rely on NSTimer; I use the timestamp provided in the render callback to calculate all of my timing. I don't know how accurate the iPhone's sample rate is, but I'm sure it's pretty close to 44100 Hz, so I just calculate when I should load the next beat based on the current sample number.

An example project that uses RemoteIO can be found here; have a look at the render callback's inTimeStamp argument.

EDIT: An example of this approach working (and on the App Store) can be found here.
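The sample-counting idea above can be sketched in a few lines of C. This is an illustrative sketch, not Apple API code: the function names are made up, and the only assumptions are a known sample rate and a running absolute sample counter from the render callback.

```c
#include <stdint.h>

/* Samples between beats at a given tempo.
   At 44100 Hz and 120 BPM that is 22050 samples per beat. */
static uint32_t samples_per_beat(double sample_rate_hz, double bpm)
{
    return (uint32_t)(sample_rate_hz * 60.0 / bpm);
}

/* A render callback covers [frame_count] frames starting at absolute
   sample time [start_sample]. Return the frame offset of the next
   beat inside this buffer, or -1 if no beat falls in it. Triggering
   at that exact offset is what makes the timing sample-accurate. */
static int32_t beat_offset_in_buffer(uint64_t start_sample,
                                     uint32_t frame_count,
                                     uint32_t spb)
{
    uint64_t next_beat = ((start_sample + spb - 1) / spb) * spb;
    if (next_beat < start_sample + frame_count)
        return (int32_t)(next_beat - start_sample);
    return -1;
}
```

Because the decision is made per render buffer from the sample counter, UI load can delay drawing but never shifts where a beat lands in the audio stream.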

Aran Mulholland
Of course you will need a BPM, but to schedule your next playback you have to find the number of samples until the next beat (defined by your BPM) and use the current timestamp to do this.
Aran Mulholland
A: 

I thought a better approach to time management would be to have a BPM setting (120, for example) and work from that instead. Measurements in minutes and seconds are nearly useless when writing/making music or music applications.

If you look at any sequencing app, they all go by beats instead of time. On the opposite side of things, if you look at a waveform editor, it uses minutes and seconds.

I'm not sure of the best way to implement this code-wise by any means, but I think this approach will save you a lot of headaches down the road.
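One hedged sketch of what "going by beats" could look like in code: store all positions in beats and convert to seconds only at the scheduling edge. The function names are invented for illustration, and the 16-step/4-steps-per-beat layout matches the question's pattern, not any standard API.

```c
/* Seconds per quarter-note beat at a given tempo.
   120 BPM -> 0.5 s per beat. */
static double seconds_per_beat(double bpm)
{
    return 60.0 / bpm;
}

/* Convert a musical position (in beats) to wall-clock seconds.
   Sequencer data stays in beats; only playback needs seconds. */
static double beats_to_seconds(double beats, double bpm)
{
    return beats * seconds_per_beat(bpm);
}

/* Which step of a 16-step pattern is playing at time t?
   Four 16th-note steps per beat, as in the question's loop. */
static int step_at_time(double t_seconds, double bpm)
{
    double step_len = seconds_per_beat(bpm) / 4.0;
    return (int)(t_seconds / step_len) % 16;
}
```

Changing the tempo then means changing one number; every stored position keeps its musical meaning, which is the headache this answer says you avoid.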

Sneakyness
A: 

I ran into the exact same problem. Did you eventually find a solution to your problem?

regards,

sig_n

+2  A: 

I opted to use a RemoteIO AudioUnit and a background thread that fills swing buffers (one buffer for reading, one for writing, which then swap) using the Audio File Services API. The buffers are then processed and mixed in the AudioUnit thread. The AudioUnit thread signals the background thread when it should start loading the next swing buffer. All the processing was in C and used the POSIX thread API; all the UI stuff was in Objective-C.

IMO, the AudioUnit/AudioFileServices approach affords the greatest degree of flexibility and control.
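A minimal sketch of the swing-buffer swap described above, with all names invented for illustration. Only the swap logic is shown; the real version would guard `write_ready` with a pthread mutex and use a condition variable to signal the background loader after each swap.

```c
#include <string.h>

#define SWING_FRAMES 4096

/* Two buffers trade roles between the render thread (reader)
   and the background loader (writer). */
typedef struct {
    float  bufA[SWING_FRAMES];
    float  bufB[SWING_FRAMES];
    float *read_buf;     /* consumed by the render callback */
    float *write_buf;    /* filled by the background thread  */
    int    write_ready;  /* set by the loader, cleared on swap */
} SwingBuffers;

static void swing_init(SwingBuffers *s)
{
    memset(s, 0, sizeof *s);
    s->read_buf  = s->bufA;
    s->write_buf = s->bufB;
}

/* Called from the render thread when read_buf is exhausted.
   Returns 1 after swapping if the loader finished in time;
   returns 0 on underrun (loader was too slow). */
static int swing_swap(SwingBuffers *s)
{
    if (!s->write_ready)
        return 0;
    float *tmp   = s->read_buf;
    s->read_buf  = s->write_buf;
    s->write_buf = tmp;
    s->write_ready = 0;   /* tells the loader to refill write_buf */
    return 1;
}
```

The render thread never blocks on file I/O: it either swaps instantly or detects an underrun, which is what keeps the audio callback real-time safe.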

Cheers,

Ben

Ben Dyer