As many people have noted, the high-precision Stopwatch class is designed for answering the question "how long did this take?" whereas the DateTime class is designed for answering the question "when does Doctor Who start?" Use the right tool for the job.
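To make the distinction concrete, here is a minimal sketch of the two APIs side by side; the Thread.Sleep call is just a stand-in for whatever work you actually want to measure:

using System;
using System.Diagnostics;
using System.Threading;

// DateTime answers "what time is it?" : a point on the wall clock.
DateTime start = DateTime.Now;

// Stopwatch answers "how long did this take?" : an elapsed interval,
// measured with the high-resolution performance counter when available.
Stopwatch watch = Stopwatch.StartNew();
Thread.Sleep(100);                        // stands in for the work being measured
watch.Stop();

Console.WriteLine(watch.Elapsed);         // the right tool for elapsed time
Console.WriteLine(DateTime.Now - start);  // works, but with far coarser resolution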
However, there is more to the problem of correctly measuring elapsed time than simply getting the timer right. You've also got to make sure that you're measuring what you really want to measure. For example, consider:
using System.Diagnostics;

var firstCall = Stopwatch.StartNew();    // start the timer
M();
firstCall.Stop();                        // stop the timer

var secondCall = Stopwatch.StartNew();   // start another timer
M();
secondCall.Stop();                       // stop the timer
Is there going to be a significant difference between the timings of the two calls? Possibly, yes. Remember, the first time a method is called, the jitter has to compile it from IL into machine code, and that takes time. In some cases the first call to a method can take many times longer than every subsequent call put together.
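You can see this for yourself by timing each call individually. The following is a rough sketch rather than a rigorous benchmark; the body of M here is an arbitrary stand-in, and on modern runtimes tiered compilation changes the exact numbers, but the first call still pays a compilation cost that the later calls do not:

using System;
using System.Diagnostics;

static void M()
{
    // Stand-in body; the real M could be anything.
    double total = 0;
    for (int i = 1; i <= 10000; i += 1)
        total += Math.Sqrt(i);
}

for (int call = 1; call <= 5; call += 1)
{
    var watch = Stopwatch.StartNew();
    M();
    watch.Stop();
    // The first line printed is typically much larger than the rest;
    // it includes the cost of jitting M from IL into machine code.
    Console.WriteLine($"Call {call}: {watch.Elapsed.TotalMilliseconds:F4} ms");
}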
So which measurement is "right"? The first measurement? The second? An average of them? It depends on what you are trying to optimize for. If you are optimizing for fast startup, then you care very, very much about the jit time. If you are optimizing for the number of identical pages served per second on a warmed-up server, then you don't care at all about jit time and should design your tests so that they don't measure it. Make sure you are measuring the thing you are actually optimizing for.
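For the warmed-up, steady-state scenario, one common pattern is to call the method once before starting the timer, so that the jit cost is paid outside the measured region, and then average over many iterations. A minimal sketch, again with an arbitrary stand-in body for M:

using System;
using System.Diagnostics;

const int iterations = 1000;

M();  // warm-up call: the jit cost is paid here, outside the measured region

var watch = Stopwatch.StartNew();
for (int i = 0; i < iterations; i += 1)
    M();
watch.Stop();

Console.WriteLine($"Average per call: {watch.Elapsed.TotalMilliseconds / iterations:F4} ms");

static void M()
{
    // Stand-in body; the real M could be anything.
    double total = 0;
    for (int i = 1; i <= 10000; i += 1)
        total += Math.Sqrt(i);
}

Because the warm-up call absorbs the one-time compilation cost, the reported average reflects steady-state behavior rather than startup behavior; if startup is what you care about, measure that first call instead.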