views: 1263
answers: 5

I just ran into some unexpected behavior with DateTime.UtcNow while doing some unit tests. It appears that when you call DateTime.Now/UtcNow in rapid succession, you get the same value back for a longer-than-expected interval of time, rather than capturing more precise millisecond increments.

I know there is a Stopwatch class that would be better suited for precise time measurements, but I was curious if someone could explain this behavior in DateTime. Is there an official precision documented for DateTime.Now (e.g. precise to within 50 ms)? Why would DateTime.Now be made less precise than what most CPU clocks could handle? Maybe it's just designed for the lowest-common-denominator CPU?

using System;
using System.Diagnostics;

public class Program
{
    public static void Main(string[] args)
    {
        var stopwatch = new Stopwatch();
        stopwatch.Start();

        // Sample DateTime.Now in a tight loop; many consecutive iterations
        // print identical tick values.
        for (int i = 0; i < 1000; i++)
        {
            var now = DateTime.Now;
            Console.WriteLine("Ticks: {0}\tMilliseconds: {1}",
                now.Ticks, now.Millisecond);
        }

        stopwatch.Stop();
        Console.WriteLine("Stopwatch.ElapsedMilliseconds: {0}",
            stopwatch.ElapsedMilliseconds);

        Console.ReadLine();
    }
}
+2  A: 

From the MSDN documentation:

The resolution of this property depends on the system timer.

They also claim that the approximate resolution on Windows NT 3.5 and later is 10 ms :)
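
If you want to see what that system timer increment actually is on a particular machine, one way is the Win32 GetSystemTimeAdjustment call; a minimal sketch (the increment is reported in 100-nanosecond units):

using System;
using System.Runtime.InteropServices;

class TimerResolution
{
    // GetSystemTimeAdjustment reports the interval between clock interrupts
    // (the timeIncrement parameter) in 100-nanosecond units.
    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GetSystemTimeAdjustment(
        out uint timeAdjustment, out uint timeIncrement, out bool adjustmentDisabled);

    static void Main()
    {
        uint adjustment, increment;
        bool disabled;
        if (GetSystemTimeAdjustment(out adjustment, out increment, out disabled))
        {
            // 10,000 ticks of 100 ns = 1 ms
            Console.WriteLine("System timer increment: {0:F2} ms", increment / 10000.0);
        }
    }
}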

Thomas Wanner
+3  A: 

For what it's worth, short of actually checking the .NET source, Eric Lippert provided a comment on this SO question saying that DateTime is only accurate to approx 30 ms. The reasoning for not being nanosecond accurate, in his words, is that it "doesn't need to be."

Scott Anderson
And it could be worse. In VBScript the Now() function rounds the returned result to the nearest second, despite the fact that the value returned has sufficient available precision to be precise to the microsecond. In C#, the structure is called DateTime; it's intended to represent a date and a time for typical real-world non-scientific domains, like when your life insurance expires or how long it's been since your last reboot. It's not intended for high-precision sub-second timing.
Eric Lippert
+4  A: 

DateTime's precision is somewhat specific to the system it's being run on. The precision is related to the speed of a context switch, which tends to be around 15 or 16ms. (On my system, it is actually about 14ms from my testing, but I've seen some laptops where it's closer to 35-40ms accuracy.)
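
One easy way to run that kind of test yourself is to spin on DateTime.UtcNow and see how often the value actually changes; a rough sketch:

using System;

class ClockGranularity
{
    static void Main()
    {
        // Spin for one second and count how many times DateTime.UtcNow
        // actually changes; the elapsed time divided by the number of
        // changes approximates the update interval of the system clock.
        DateTime start = DateTime.UtcNow;
        DateTime last = start;
        int changes = 0;

        while ((DateTime.UtcNow - start).TotalSeconds < 1.0)
        {
            DateTime now = DateTime.UtcNow;
            if (now != last)
            {
                changes++;
                last = now;
            }
        }

        Console.WriteLine("Observed ~{0:F1} ms between clock updates",
            1000.0 / changes);
    }
}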

Peter Bromberg wrote an article on high precision code timing in C#, which discusses this.

Reed Copsey
+1  A: 

From MSDN you'll find that DateTime.Now has an approximate resolution of 10 milliseconds on all NT operating systems.

The actual precision is hardware dependent. Better precision can be obtained using QueryPerformanceCounter.
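
Stopwatch already wraps QueryPerformanceCounter when the hardware supports it, so you can check the resolution you are getting without any P/Invoke; a small sketch:

using System;
using System.Diagnostics;

class StopwatchResolution
{
    static void Main()
    {
        // Stopwatch uses QueryPerformanceCounter internally when a
        // high-resolution counter is available.
        Console.WriteLine("IsHighResolution: {0}", Stopwatch.IsHighResolution);
        Console.WriteLine("Frequency: {0} ticks per second", Stopwatch.Frequency);
        Console.WriteLine("Resolution: {0} ns per tick",
            1000000000.0 / Stopwatch.Frequency);
    }
}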

Kevin Montrose
+9  A: 

Why would DateTime.Now be made less precise than what most CPU clocks could handle?

A good clock should be both precise and accurate; those are different things. As the old joke goes, a stopped clock is exactly accurate twice a day; a clock a minute slow is never accurate at any time. But the clock a minute slow is always precise to the nearest minute, whereas a stopped clock has no useful precision at all.

Why should DateTime be precise to, say, a microsecond when it cannot possibly be accurate to the microsecond? Most people do not have any source for official time signals that are accurate to the microsecond. Therefore, giving six digits after the decimal place of precision, the last five of which are garbage, would be lying.

Remember, the purpose of DateTime is to represent a date and time. High-precision timing is not at all the purpose of DateTime; as you note, that's the purpose of Stopwatch. The purpose of DateTime is to represent a date and time for purposes like displaying the current time to the user, computing the number of days until next Tuesday, and so on.
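
For example, the "days until next Tuesday" kind of calculation might look like this sketch:

using System;

class NextTuesday
{
    static void Main()
    {
        // How many days until next Tuesday? A calendar question where
        // sub-second precision is irrelevant.
        DateTime today = DateTime.Today;
        int days = ((int)DayOfWeek.Tuesday - (int)today.DayOfWeek + 7) % 7;
        if (days == 0)
        {
            days = 7; // "next" Tuesday, not today
        }
        Console.WriteLine("Next Tuesday is in {0} days, on {1:d}",
            days, today.AddDays(days));
    }
}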

In short, "what time is it?" and "how long did that take?" are completely different questions; don't use a tool designed to answer one question to answer the other.

Thanks for the question; this will make a good blog article! :-)

Eric Lippert
@Eric Lippert: Raymond Chen has an oldie but goodie on the very topic of the difference between "precision" and "accuracy": http://blogs.msdn.com/oldnewthing/archive/2005/09/02/459952.aspx
Jason
Ok, good point about precision vs. accuracy. I guess I still don't really buy the statement that DateTime is not accurate because "it doesn't have to be." If I have a transactional system and I want to mark a datetime for each record, it seems intuitive to use the DateTime class, but there appear to be more accurate/precise time components in .NET, so why would DateTime be made less capable? I guess I'll have to do some more reading...
Andy White
OK @Andy, suppose you do have such a system. On one machine you mark a transaction as occurring at January 1st, 12:34:30.23498273. On another machine in your cluster you mark a transaction as occurring at January 1st, 12:34:30.23498456. Which transaction occurred first? Unless you know that the two machines' clocks are synchronized to within a microsecond of each other, you have no idea which one occurred first. The extra precision is *misleading garbage*. If I had my way, all DateTimes would be rounded to the nearest second, as they were in VBScript.
Eric Lippert
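
Rounding a timestamp to the nearest second, as the last comment suggests, is straightforward; a minimal sketch:

using System;

class RoundedTimestamp
{
    // Round a DateTime to the nearest whole second by discarding (or
    // carrying) the sub-second ticks; TimeSpan.TicksPerSecond is 10,000,000.
    static DateTime RoundToSecond(DateTime value)
    {
        long remainder = value.Ticks % TimeSpan.TicksPerSecond;
        long rounded = value.Ticks - remainder;
        if (remainder >= TimeSpan.TicksPerSecond / 2)
        {
            rounded += TimeSpan.TicksPerSecond;
        }
        return new DateTime(rounded, value.Kind);
    }

    static void Main()
    {
        DateTime now = DateTime.UtcNow;
        Console.WriteLine("Raw:     {0:o}", now);
        Console.WriteLine("Rounded: {0:o}", RoundToSecond(now));
    }
}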