views: 196

answers: 4

It seems like NUMA is promising for parallel programming, and if I'm not wrong, the latest CPUs have built-in support for it, such as the Intel Core i7.

Do you anticipate the CLR adapting to NUMA soon?

EDIT: By this I mean having support for it, and taking advantage of it.

+1  A: 

NUMA is a hardware architecture, not necessarily something that needs adoption in the CLR directly. For details, see the NUMA FAQ.

That being said, there are advantages to making software aware of its architecture. The folks on the CLR team do seem to be aware of issues with cache coherency, etc., so I would bet that there are some optimizations for this. Also, the design of the scheduler in the Task Parallel Library in C# 4 seems promising for taking better advantage of NUMA architectures.
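
For a rough picture of how that scheduler gets used, here is a minimal C# 4 TPL sketch (illustrative only; there is no NUMA-specific API here, and the work-stealing scheduler decides where the iterations actually run):

    // Minimal sketch: Parallel.For splits the index range into chunks, and the
    // TPL work-stealing scheduler tends to keep each worker thread on its own
    // chunk, which can help memory locality on NUMA machines.
    using System;
    using System.Threading.Tasks;

    class TplSketch
    {
        static void Main()
        {
            double[] data = new double[10000000];

            Parallel.For(0, data.Length, i =>
            {
                data[i] = Math.Sqrt(i);   // independent per-element work
            });

            Console.WriteLine(data[data.Length - 1]);
        }
    }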

Reed Copsey
+1  A: 

In a sense, NUMA is orthogonal to the CLR's memory model. In other words, the hardware/OS has its method of access, the CLR has its memory model demands, and it is up to the CLR implementer to make the two play nicely together. In practice this is difficult, and there are flaws in the current implementation. But since the CLR already runs on hardware supporting NUMA, I'm not really sure what you mean by "adapt NUMA soon".
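
To make "memory model demands" concrete, here is a minimal sketch of one such demand, using only the volatile keyword: a volatile write by one thread has to become visible to a volatile read by another, and the CLR/JIT must guarantee that on whatever hardware it runs on, NUMA or not:

    // Illustrative sketch: the CLR memory model gives volatile writes release
    // semantics and volatile reads acquire semantics, so once 'done' is
    // observed as true, the earlier write to 'value' is visible as well.
    using System;
    using System.Threading;

    class VisibilitySketch
    {
        static volatile bool done;
        static int value;

        static void Main()
        {
            var worker = new Thread(() =>
            {
                value = 42;    // ordinary write
                done = true;   // volatile write publishes it
            });
            worker.Start();

            while (!done) { }            // volatile read: spin until published
            Console.WriteLine(value);    // prints 42
            worker.Join();
        }
    }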

Craig Stuntz
+1  A: 

All the answers here so far are correct in highlighting NUMA as a hardware architecture. An interesting read would be this article by Joe Duffy on concurrency and the CLR.

Andrew Hare
A: 

With NUMA, you basically have a per-processor memory controller. You get that with Intel QuickPath and with AMD HyperTransport. The thing is, as far as I know, there are currently no motherboards, either for the i7 or for the Phenom, that support more than one CPU.

Anyway, this is very low-level and has nothing to do with the CLR. It's up to the operating system to take advantage of it.
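
As a rough sketch of that OS-level view (assuming Windows, where kernel32 exposes the NUMA topology), a managed program can still ask the operating system how many NUMA nodes it sees via P/Invoke; a single-socket i7 or Phenom box will typically report only node 0:

    // Illustrative sketch: query the highest NUMA node number from the Windows API.
    // The CLR itself plays no part here; the information comes from the OS.
    using System;
    using System.Runtime.InteropServices;

    class NumaQuery
    {
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool GetNumaHighestNodeNumber(out uint highestNodeNumber);

        static void Main()
        {
            uint highestNode;
            if (GetNumaHighestNodeNumber(out highestNode))
                Console.WriteLine("Highest NUMA node: " + highestNode);
            else
                Console.WriteLine("Query failed, error " + Marshal.GetLastWin32Error());
        }
    }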

vartec