I'm running into a bit of a problem with AudioTimeStamps on the iPhone. When I run my application in the simulator, AudioTimeStamp.mHostTime appears to be in nanoseconds (1,000,000,000 ticks per second), whereas on my device (iPod Touch 2G) it appears to tick at roughly 6,000,000 times per second.

It appears that on OS X there is a function (AudioConvertHostTimeToNanos, declared in CoreAudio/HostTime.h) to convert host time to and from nanoseconds, but this function is not in the iPhone headers.

Is there any way to find out the rate of mHostTime at runtime, or to convert it to seconds, nanoseconds, or any other unit? Will this value change between software or hardware versions (as it does between the simulator and my device)?

A: 

You need to use the mach_timebase_info structure to figure this out.

struct mach_timebase_info {
    uint32_t numer;
    uint32_t denom;
};

See: http://shiftedbits.org/2008/10/01/mach_absolute_time-on-the-iphone/

The easiest thing to do is simply use the CAHostTimeBase helper class that Apple provides in Developer/Examples/CoreAudio/PublicUtility.

CAHostTimeBase.cpp and CAHostTimeBase.h do everything you need.

Roger The Engineer
A: 

There exists the following file:

<mach/mach_time.h>

In this file you'll find a function named mach_absolute_time(). It returns a uint64_t value with no defined unit. Think of it as a number of ticks, where the length of a single tick is not defined anywhere. Only four things are guaranteed:

  1. mach_absolute_time() returns the number of "ticks" since the last boot.
  2. At every boot the tick counter starts at zero.
  3. The tick counter counts strictly upwards (it never goes backwards).
  4. The tick counter only counts ticks while the system is running.

As you can see, the tick counter behaves quite differently from the normal system clock. First of all, the system clock does not start at zero when the system boots, but at the system's best approximation of the current wall-clock time. Nor does the system clock run strictly upwards: the system regularly synchronizes its time using NTP (Network Time Protocol), and if it notices at the next NTP sync that it is two seconds ahead, it turns the system clock back by two seconds to correct it. This regularly breaks software, because many programmers rely on the system time never jumping backwards; but it does, and it is allowed to do so. The last difference is that the normal system time keeps running while the system is asleep, whereas the tick counter does not increase during sleep. When the system wakes up again, the counter is only a couple of ticks ahead of where it was when the system went to sleep.

So how do you convert those ticks into a real "time value"?

The same header also defines a structure named mach_timebase_info:

struct mach_timebase_info {
        uint32_t        numer;
        uint32_t        denom;
};

You can get the correct values for this structure using the function mach_timebase_info(), e.g.

kern_return_t kerror;
mach_timebase_info_data_t tinfo;

kerror = mach_timebase_info(&tinfo);
if (kerror != KERN_SUCCESS) {
    // TODO: handle error
}

KERN_SUCCESS (and possible error codes) are defined in

<mach/kern_return.h>

It is very unlikely for this function to return an error, though; and since KERN_SUCCESS is equal to zero, you can also simply check that kerror is non-zero.

Once you have the info in tinfo, you can use it to calculate a conversion factor, in case you want to turn these ticks into a real time unit:

double hTime2nsFactor = (double)tinfo.numer / tinfo.denom;

Casting the first operand to double makes the compiler promote the second operand to double as well, so the result is also a double. This factor seems to be 1.0 on Intel machines, but it can be quite different on PPC machines (and it may be different on ARM as well). Knowing it, it is pretty easy to convert host time to nanoseconds and nanoseconds to host time.

uint64_t systemUptimeNS = (uint64_t)(mach_absolute_time() * hTime2nsFactor);

systemUptimeNS contains the number of nanoseconds the system has been running (not sleeping) since the last boot. Conversely, if you divide a time in nanoseconds by this factor, you get the number of ticks. That is very useful for the function mach_wait_until(). Suppose you want the current thread to sleep for 800 nanoseconds; here's how you'd do it:

uint64_t sleepTimeInTicks = (uint64_t)(800 / hTime2nsFactor);
mach_wait_until(mach_absolute_time() + sleepTimeInTicks);

A little tip: if you regularly need to convert time values to ticks, it is usually (depending on the CPU) faster to multiply than to divide:

double ns2HTimeFactor = 1.0 / hTime2nsFactor;

Now you can multiply by ns2HTimeFactor instead of dividing by hTime2nsFactor.

Of course it is a waste of time to recalculate the factors every time you need them. They are constant and will never change while the system is running, so you can calculate them once near application startup and keep them around until the application quits.

In Cocoa I'd recommend writing a small class for all of the above. You can calculate the conversion factors in the class's +(void)initialize method. Cocoa guarantees that this method is automatically executed before any message is sent to the class, that it is executed only once during the application's runtime, and that it is executed in a thread-safe manner, so you don't have to worry about locking, synchronization, or atomic operations.

Mecki