I know that 10 years ago, typical clock precision equaled a system tick, which was in the range of 10-30 ms. Over the past years, precision has been increased in multiple steps. Nowadays, there are ways to measure time intervals in actual nanoseconds. However, common frameworks still return time with a precision of only around 15 ms.

My question is: which steps increased the precision, how is it possible to measure in nanoseconds, and why do we still often get worse-than-microsecond precision (for instance in .NET)?
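
To illustrate, here is a rough C# sketch of the gap I mean (the exact step sizes will vary by machine and OS version):

    using System;
    using System.Diagnostics;

    // DateTime typically only advances once per system tick (around 10-15 ms on
    // older Windows versions), so spin until the reported value changes.
    long first = DateTime.UtcNow.Ticks;
    long next = first;
    while (next == first)
        next = DateTime.UtcNow.Ticks;
    Console.WriteLine($"DateTime step: {(next - first) / 10_000.0} ms"); // Ticks are 100 ns units

    // Stopwatch exposes the high-resolution performance counter where available.
    Console.WriteLine($"High resolution: {Stopwatch.IsHighResolution}");
    Console.WriteLine($"Counter frequency: {Stopwatch.Frequency} ticks/s");
    Console.WriteLine($"Counter granularity: {1e9 / Stopwatch.Frequency:F1} ns");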

(Disclaimer: It strikes me as odd that this was not asked before, so I guess I missed this question when I searched. Please close and point me to the question in that case, thanks. I believe this belongs on SO and not on any other SOFU site. I understand the difference between precision and accuracy.)

A: 

I literally read a blog post on MSDN about this today; read it here, it covers the topic pretty well. It has an emphasis on C#'s DateTime, but it's universally applicable.

Chris
I just read the same. It raised the question, since Eric did not go into detail. His article is only about the basics.
mafutrct
@mafutrct :) Measuring time isn't an exact science, because what is time? Time is defined as the period over which events occur. An atomic clock uses an atomic resonance frequency standard as its timekeeping element, which makes it very accurate. But computers cannot use such accurate measurements, so they use other methods, which are less accurate. This is how, over time, clocks drift out of sync.
Chris
@Chris Well yeah, but that does not quite answer the question. Computers can provide nanosecond precision for time differences, so there should be a way to improve the millisecond precision we usually get. Also, I'd like to know about the ways this nanosecond precision is already (sometimes) achieved.
mafutrct
+3  A: 

It really is a feature of the history of the PC. The original IBM PC used a chip called the real-time clock (RTC), which was battery backed (do you remember needing to change the batteries on these?). The RTC operated while the machine was powered off and kept the time. It ran at 32.768 kHz (2^15 cycles/second), which made it easy to calculate time on a 16-bit system. This real-time clock was then written to CMOS, which was available via an interrupt system in older operating systems.
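
As a quick back-of-the-envelope illustration in C# (purely illustrative): 2^15 cycles per second means a 15-bit binary counter overflows exactly once per second, so whole seconds fall out without any division:

    using System;

    const int rtcFrequencyHz = 32_768;         // 2^15, the RTC crystal frequency
    const int counterBits = 15;
    long ticksPerOverflow = 1L << counterBits; // 32768 ticks per counter overflow
    Console.WriteLine(ticksPerOverflow / (double)rtcFrequencyHz); // 1 second per overflow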

A newer standard from Microsoft and Intel, the High Precision Event Timer (HPET), specifies a clock speed of at least 10 MHz: http://www.intel.com/hardwaredesign/hpetspec_1.pdf Even newer PC architectures put it on the Northbridge controller, where the HPET can run at 100 MHz or even greater. At 10 MHz we should be able to get a resolution of 100 nanoseconds, and at 100 MHz we should be able to get 10-nanosecond resolution.
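
In other words, the achievable resolution is simply the reciprocal of the counter frequency; a small illustrative C# sketch:

    using System;

    // Resolution (in nanoseconds) of a counter is the reciprocal of its frequency.
    static double NsPerTick(double frequencyHz) => 1e9 / frequencyHz;

    Console.WriteLine(NsPerTick(10_000_000));  // 100 ns at the HPET baseline of 10 MHz
    Console.WriteLine(NsPerTick(100_000_000)); // 10 ns at 100 MHz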

The following operating systems are known not to be able to use HPET: Windows XP, Windows Server 2003 and earlier Windows versions, and older Linux versions.

The following operating systems are known to be able to use HPET: Windows Vista, Windows Server 2008, Windows 7, x86-based versions of Mac OS X, Linux distributions using the 2.6 kernel, and FreeBSD.

With a Linux kernel, you need the newer "rtc-cmos" hardware clock device driver rather than the original "rtc" driver.

All that said, how do we access this extra resolution? I could cut and paste from previous Stack Overflow answers, but I won't; just search for HPET and you will find the answers on how to get finer-grained timers working.
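
As a rough starting point in .NET, something like the following should work; Stopwatch wraps the platform's high-resolution counter (QueryPerformanceCounter on Windows), and the OS decides which hardware source backs it, so the effective granularity varies per machine:

    using System;
    using System.Diagnostics;
    using System.Threading;

    // Time a short operation with the high-resolution counter exposed by Stopwatch.
    var sw = Stopwatch.StartNew();
    Thread.Sleep(1);                 // the operation being timed
    sw.Stop();

    // ElapsedTicks are raw counter ticks; convert via the counter frequency.
    double ns = sw.ElapsedTicks * (1e9 / Stopwatch.Frequency);
    Console.WriteLine($"Elapsed: {ns:F0} ns ({sw.Elapsed.TotalMilliseconds:F3} ms)");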

Romain Hippeau