Hi all
With .NET 4.0 coming up and the new parallel extensions, I was wondering whether the CLR will be able to optimize and push some calculations to the GPU, or whether there is any library that can help with the task.
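To be concrete, here's a rough sketch of the kind of data-parallel loop I have in mind, written with the new Task Parallel Library. Today this runs on CPU cores; the question is whether work shaped like this could end up on the GPU, automatically or via some library:

```csharp
using System;
using System.Threading.Tasks;

class ParallelSketch
{
    static void Main()
    {
        const int n = 1000000;
        double[] input = new double[n];
        double[] output = new double[n];

        for (int i = 0; i < n; i++)
            input[i] = i;

        // Data-parallel loop: every iteration is independent,
        // which is exactly the shape of work a GPU is good at.
        // With the TPL this is scheduled across CPU threads only.
        Parallel.For(0, n, i =>
        {
            output[i] = Math.Sqrt(input[i]) * 2.0;
        });

        Console.WriteLine(output[n - 1]);
    }
}
```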
I'm no GPU programming expert at all, so forgive me if this is a silly question. Maybe the CLR simply doesn't support interfacing with the GPU's instruction set? Are GPU instruction sets too primitive for this, or simply out of scope?
Thanks in advance.
[EDIT] Just to clarify: I know about CUDA and similar libraries, but I want to know whether there's a pure .NET solution, and, if so, whether it can work behind the scenes for you or you have to write explicit code.