Context: I'm just starting out. I'm not even touching the Direct3D 11 API yet; instead I'm trying to understand the pipeline, etc.

From looking at documentation and information floating around the web, it seems like some calculations are being handled by the application. That is, instead of simply handing the GPU matrices to multiply, the calculations are being done by a math library that operates on the CPU. I don't have any particular resources to point to, although I guess I can point to the XNA Math Library or the samples shipped in the February DX SDK. When you see code like mViewProj = mView * mProj;, that multiplication is being calculated on the CPU. Or am I wrong?
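For example, with the XNA Math library that multiplication looks roughly like this (just a sketch; the camera position, field of view, and the other parameters are placeholder values I made up):

    #include <windows.h>
    #include <xnamath.h>

    // Build a view matrix from an arbitrary camera position (placeholder values).
    XMVECTOR eye   = XMVectorSet(0.0f, 2.0f, -5.0f, 0.0f);
    XMVECTOR focus = XMVectorSet(0.0f, 0.0f,  0.0f, 0.0f);
    XMVECTOR up    = XMVectorSet(0.0f, 1.0f,  0.0f, 0.0f);
    XMMATRIX mView = XMMatrixLookAtLH(eye, focus, up);

    // Build a perspective projection (fov, aspect ratio, near/far planes are placeholders).
    XMMATRIX mProj = XMMatrixPerspectiveFovLH(XM_PIDIV4, 800.0f / 600.0f, 0.1f, 100.0f);

    // This multiply runs on the CPU, inside the math library, not on the GPU.
    XMMATRIX mViewProj = XMMatrixMultiply(mView, mProj);   // equivalent to mView * mProj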

If you were writing a program where you can have 10 cubes on the screen, and you can move or rotate the cubes as well as the viewpoint, what calculations would you do on the CPU? I think I would store the geometry for a single cube, plus transform matrices representing the actual instances. Then it seems I would use the XNA Math library, or another of my choosing, to transform each cube from model space into world space, and then push that information to the GPU.
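Roughly what I have in mind (only a sketch, with a made-up per-cube angle/position struct, using the XNA Math types):

    // Hypothetical per-instance state; the real program would update these
    // as the user moves or rotates a cube.
    struct CubeInstance { float angle; XMFLOAT3 position; };

    CubeInstance cubes[10];
    XMMATRIX     worlds[10];

    for (int i = 0; i < 10; ++i)
    {
        // Model -> world transform for each cube, computed on the CPU.
        XMMATRIX rotation    = XMMatrixRotationY(cubes[i].angle);
        XMMATRIX translation = XMMatrixTranslation(cubes[i].position.x,
                                                   cubes[i].position.y,
                                                   cubes[i].position.z);
        worlds[i] = rotation * translation;
    }
    // worlds[] (plus the view and projection matrices) are what get pushed to
    // the GPU, which applies them to the single shared cube mesh.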

That's quite a bit of calculation on the CPU. Am I wrong?

  • Am I reaching conclusions based on too little information and understanding?
  • What terms should I Google for, if the answer is STFW?
  • Or if I am right, why aren't these calculations being pushed to the GPU as well?

EDIT: By the way, I am not using XNA, but the documentation notes that the XNA Math Library replaces the previous DX math library. (I see the XNA Math Library in the SDK as purely a template/inline library.)

+3  A: 

"Am I reaching conclusions based on too little information and understanding?"

Not as a bad thing, as we all do it, but in a word: Yes.

What is being done by the GPU is, generally, dependent on the GPU driver and your method of access. Most of the time you really don't care or need to know (other than curiosity and general understanding).

For mViewProj = mView * mProj;, this is most likely happening on the CPU. But it is not much of a burden (hundreds of cycles at most). The real trick is the application of the new view matrix to the "world": every vertex needs to be transformed, more or less, along with shading, texturing, lighting, etc. All of this work is done on the GPU (if done on the CPU, things slow down really fast).

Generally you make high level changes to the world, maybe 20 CPU bound calculations, and the GPU takes care of the millions or billions of calculations needed to render the world based on the changes.

In your 10 cube example: you supply a transform for each cube, and any math needed to create those transforms is CPU bound (with exceptions). You also supply a transform for the view; again, creating that transform matrix might be CPU bound. Once you have your 11 new matrices, you apply them to the world. From a hardware point of view the 11 matrices need to be copied to the GPU...that will happen very, very fast...once copied the CPU is done, and the GPU recalculates the world based on the new data, renders it to a buffer and puts it on the screen. So for your 10 cubes the CPU bound calculations are trivial.
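As a rough illustration of that hand-off in D3D11 (a sketch only: context, cbPerObject, and the matching vertex shader cbuffer are assumed to be created elsewhere, and worlds[i] / mViewProj stand for your per-cube world matrices and combined view-projection matrix):

    // Assumed to exist elsewhere: context (ID3D11DeviceContext*) and
    // cbPerObject (an ID3D11Buffer* created as a constant buffer), with a
    // matching cbuffer declared in the vertex shader.
    struct PerObjectConstants
    {
        XMMATRIX worldViewProj;   // layout must match the HLSL cbuffer
    };

    for (int i = 0; i < 10; ++i)
    {
        PerObjectConstants constants;
        // CPU side: combine this cube's world matrix with view * projection.
        constants.worldViewProj = XMMatrixTranspose(worlds[i] * mViewProj);

        // Copy one 64-byte matrix to the GPU; the per-vertex math happens there.
        context->UpdateSubresource(cbPerObject, 0, NULL, &constants, 0, 0);
        context->VSSetConstantBuffers(0, 1, &cbPerObject);
        context->DrawIndexed(36, 0, 0);   // 36 indices for a cube
    }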

Look at some reflected code for an XNA project and you will see where your calculations end and XNA begins (XNA will do everything it possibly can on the GPU).

Rusty
"Generally you make high level changes to the world, maybe 20 CPU bound calculations," : Even for more complex scenes? Would you then redesign your math library (say, pushing CPU bound calculations to the GPU as well)? Or is it simply a flawed design to have a high number of CPU bound calculations? (Also, note my edit, using C++, not C# + XNA).
Using the CPU is not evil: the number of CPU calculations you can perform while still maintaining 60 FPS (the minimum target FPS for a good user experience, IMHO) is absolutely insane on a modern dual-core CPU, and with 64-bit, 6-core, 12-thread 3.5GHz CPUs available for < $1000 USD, it only gets better. Of course, the allowable % of CPU time is completely up to you. There are several libraries for pushing general calculation off to a GPU; see: http://stackoverflow.com/questions/1249892/c-perform-operations-on-gpu-not-cpu-calculate-pi . The use of the GPU as a math co-processor is definitely on the move.
Rusty
I don't consider the CPU to be evil, but I just find it odd that calculations related to rendering graphics are being left to the CPU, and then there's a cutting edge trend to move such calculations to the very device that accelerates rendering. Regardless, it's good to have confirmation about the calculations on the CPU. Thanks for taking the time to answer my question.
@zirgen: "odd...left to the CPU"...It has not been that long since you didn't have a choice...everything happened on the CPU. The way something gets done on a given platform is always affected by the history of the platform, and some of the time (OK, most) history wins out over "The Right Way."...Cheers
Rusty