You're looking in entirely the wrong places. The "overhead" of a document/view architecture is in the nanosecond range (basically, accessing data via a pointer).
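To put that in concrete terms, the indirection amounts to something like the following (purely illustrative C++, not any particular framework's document/view classes): the view holds a pointer to the document and reads data through it, and that pointer dereference is essentially the entire "cost" of the separation.

```cpp
#include <iostream>
#include <vector>

// Illustrative stand-ins for a framework's document and view classes.
struct Document {
    std::vector<double> prices;      // the application's data lives here
};

struct View {
    Document* doc;                   // the "overhead": one pointer indirection
    void render() const {
        for (double p : doc->prices) // read the data through the pointer
            std::cout << p << '\n';  // stand-in for actual drawing
    }
};

int main() {
    Document d{{101.25, 101.30, 101.10}};
    View v{&d};
    v.render();                      // the doc/view "cost" is just the doc-> above
}
```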
For comparison, the absolute maximum rate at which you can meaningfully update the screen is the monitor's refresh rate, which is typically 60 Hz (i.e., once every 16.67 milliseconds).
To make even that kind of refresh rate meaningful, you can't really change much in any given monitor update -- if you try to change too much, the user won't be able to follow what's going on.
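In code, that ceiling just means batching whatever data has arrived and repainting at most once per refresh interval -- roughly like this sketch (assuming a fixed 60 Hz target rather than querying the actual monitor, with console output standing in for the real repaint):

```cpp
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    using clock = std::chrono::steady_clock;
    const auto frame = std::chrono::microseconds(16667);  // ~60 Hz
    auto next = clock::now() + frame;

    for (int i = 0; i < 5; ++i) {              // a few frames, for demonstration
        // ...gather whatever updates arrived since the last frame...
        std::cout << "repaint " << i << '\n';  // stand-in for invalidating/redrawing the window
        std::this_thread::sleep_until(next);   // wait for the next refresh slot
        next += frame;
    }
}
```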
As far as threading goes, by far the simplest method is to do all the actual window updating in one thread, and use the other threads for the computations that generate the data being displayed. As long as you ensure that the display thread doesn't have to do much computation of its own, updating the window as fast as there's any use for is pretty easy.
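Here's a minimal sketch of that arrangement (illustrative names only; console output stands in for the real window update): worker threads push their results into a shared queue, and a single display thread is the only one that ever touches the "window".

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

std::queue<double> results;               // produced by workers, consumed by the display thread
std::mutex m;
std::condition_variable cv;
bool done = false;

void worker() {                           // stand-in for the real computation
    for (int i = 0; i < 5; ++i) {
        double value = i * 1.5;           // the "expensive" work goes here
        {
            std::lock_guard<std::mutex> lock(m);
            results.push(value);
        }
        cv.notify_one();
    }
}

void display() {                          // the only thread that "updates the window"
    std::unique_lock<std::mutex> lock(m);
    for (;;) {
        cv.wait(lock, [] { return !results.empty() || done; });
        while (!results.empty()) {
            std::cout << "draw " << results.front() << '\n';  // stand-in for drawing
            results.pop();
        }
        if (done) break;
    }
}

int main() {
    std::thread ui(display);
    std::thread w1(worker), w2(worker);
    w1.join(); w2.join();
    {
        std::lock_guard<std::mutex> lock(m);
        done = true;
    }
    cv.notify_one();
    ui.join();
}
```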
Edit: As far as C++ vs. C# goes, it depends. I have no doubt at all that you can get entirely adequate display performance from either one. The real question is how much computation you're doing behind those displays. What you've mentioned is mostly displaying data that's pretty close to raw (price, volume, etc.). For that, C# will probably be just fine. My guess would be that the people you've talked to are doing considerably more computation than that -- and that's the real Achilles heel of .NET (or almost anything else that runs on a VM). From what I've seen, for really heavy-duty computation, C# and the like aren't very competitive.
Just for example, in another answer a while back I mentioned an application I originally wrote in C++ that another team rewrote in C#, and the rewrite ran about 3 times slower. Since posting that, I got curious and talked with them a bit more about it, asking whether they couldn't improve its speed to at least come close to the C++ version with a little extra work.
Their reply was, in essence, that they'd already done that extra work, and it was not just "a little". The C# rewrite took something like 3 1/2 to 4 months. Of that time, it took them less than a month to duplicate the features of the original; the entire rest was spent trying to make it fast enough to be usable.
I hasten to caution that 1) this is only one data point, and 2) I've no idea how close it is to anything you might do. Nonetheless, it does give some idea of the kind of slowdown you could run into when (and if) you start to do real computation rather than just passing data through from the network to the screen. At the same time, a quick look indicates that it's generally in line with the results on the Computer Language Shootout web site -- though keep in mind the results there are for Mono rather than Microsoft's implementation.
At least to me, the real question comes down to this: is your concern for performance really well-founded or not? For something like 90% of the applications around, what's important is simply that the code does what you want it to, and speed of execution matters little, as long as it doesn't get drastically slower (e.g., hundreds or thousands of times slower). If your code falls into that (large) category, C# may very well be a good choice. If you really do have a good reason to be concerned about speed of execution, then it seems to me choosing C# would be a lot more questionable.