There is something I have never understood. How can a great big PC game like GTA IV use 50% of my CPU and run at 60fps, while a DX demo of a rotating teapot at 60fps uses a whopping 30%?

Thanks

+19  A: 

In general, it's because (1) games are selective about what they actually need to render, and (2) they take special advantage of your hardware.

For instance, one easy optimization you can make involves not actually trying to draw things that can't be seen. Consider a complex scene like a cityscape from Grand Theft Auto IV. The renderer isn't actually rendering all of the buildings and structures. Instead, it's rendering only what the camera can see. If you could fly around to the back of those same buildings, facing the original camera, you would see a half-built hollowed-out shell structure. Every point that the camera cannot see is not rendered -- since you can't see it, there's no need to try to show it to you.
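
To make that concrete, here's a minimal sketch of view-frustum culling, one common form of this visibility test (the Mesh type and drawMesh function are hypothetical placeholders):

    // Minimal view-frustum culling sketch. Meshes whose bounding spheres fall
    // entirely outside any frustum plane are never submitted to the renderer
    // at all -- the cheap test saves the expensive draw.
    struct Mesh    { /* vertex buffers, materials, ... */ };
    struct Sphere  { float x, y, z, radius; };
    struct Plane   { float a, b, c, d; };    // ax + by + cz + d = 0, normal points inward
    struct Frustum { Plane planes[6]; };     // near, far, left, right, top, bottom

    void drawMesh(const Mesh& mesh);         // hypothetical renderer entry point

    bool isVisible(const Frustum& f, const Sphere& s)
    {
        for (int i = 0; i < 6; ++i) {
            const Plane& p = f.planes[i];
            // Signed distance from the sphere centre to this plane.
            float dist = p.a * s.x + p.b * s.y + p.c * s.z + p.d;
            if (dist < -s.radius)
                return false;                // completely outside this plane
        }
        return true;                         // inside or intersecting the frustum
    }

    void renderScene(const Frustum& f, const Mesh* meshes, const Sphere* bounds, int count)
    {
        for (int i = 0; i < count; ++i)
            if (isVisible(f, bounds[i]))
                drawMesh(meshes[i]);
    }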

Furthermore, when you're developing against a particular set of hardware, optimized instructions and special techniques exist that enable even better speedups.

On the second point, it's common for graphics API examples to fall back to what's called a software renderer when your hardware doesn't support all of the features needed to show a pretty example, like shadows, reflection, ray-tracing, physics, et cetera. The software renderer mimics a hypothetical fully featured hardware device, so that every feature of the API can be shown off even though no real card supports them all. Since that hardware doesn't actually exist, the work runs on your CPU instead, which is far less efficient than delegating it to a graphics card.
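
You can see this fallback pattern in Direct3D 9 device creation, for instance. A rough sketch (assuming an existing window handle hwnd; error handling elided):

    // Try the hardware (HAL) device first; if that fails, fall back to the
    // reference rasterizer, which supports every API feature but runs
    // entirely on the CPU -- correspondingly slow.
    #include <d3d9.h>

    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed   = TRUE;
    pp.SwapEffect = D3DSWAPEFFECT_DISCARD;

    IDirect3DDevice9* device = NULL;
    HRESULT hr = d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                                   D3DCREATE_HARDWARE_VERTEXPROCESSING,
                                   &pp, &device);
    if (FAILED(hr)) {
        // No capable GPU path: everything below here runs on the CPU.
        hr = d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_REF, hwnd,
                               D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                               &pp, &device);
    }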

John Feminella
A DX demo uses your hardware, too. So, what's 'special'?
tur1ng
but a demo is unlikely to be optimal about it.
BioBuckyBall
@tur1ng, the teapot demo, for example, may have enabled *reflection*, *shadows* and other effects.
Nick D
The teapot might have more polygons than a GTA4 scene. The fact is, the current bottleneck in graphics rendering is texture work: bump-mapping-derived techniques that add detail, and other post-rendering effects.
Klaim
@Klaim: That's true. I'm implicitly assuming above that the teapot is comparatively easier to render than the GTA4 scene.
John Feminella
Textures - the teapot is being created from a large number of individual triangles, all with normals and lighting interactions. What looks like an insanely complex 3D world in the game is often fairly simple large blocks covered with a detailed picture. A lot of the '3D' is clever shadow and perspective artistic effects in a static 2D image drawn onto the 3D shape.
Martin Beckett
True, but it does not answer the question - see my answer about vsync ;-)
frunsi
+8  A: 

For a few reasons:

  • 3D game engines are highly optimized
  • most of the work is done by your graphics adapter
  • 50%? Hmm, let me guess: you have a dual core and only one core is being used ;-)

EDIT: To give a few numbers

On a 2.8 GHz Athlon 64 with an NV 6800 GPU, the results are:

  • CPU: 72.78 MFLOPS
  • GPU: 2440.32 MFLOPS
stacker
Good point about the 50% CPU.
bk1e
@stacker: are you implying that all the computations in top-notch 3D games that are not done by the GPU are actually mono-threaded and would, by some chance, fill 100% of one core? Meaning that the game's performance would be bound to a single non-GPU core? I find that *very* hard to believe.
Webinator
@WizardOfOdds In the last few years things changed and many of the new games support multicore CPU's http://www.tomshardware.co.uk/forum/94969-25-dual-core-supported-games
stacker
It doesn't imply the program is mono-threaded - it just implies that at least one thread is going as fast as it possibly can. Which is reasonable, because why would you want it to go any slower? On the other hand, many games are almost entirely mono-threaded. It's very difficult to write complex simulations effectively with multithreading, because the typical trade-off in concurrent/distributed systems - accepting a little more latency to buy a lot more throughput - is no good for a game that is supposed to be responsive.
Kylotan
+2  A: 

The core of any answer should be this: the transformations that 3D engines perform are mostly specified with additions and multiplications (linear algebra, with no branches or jumps), and the work of drawing a single frame is often specified in a way that lets many such add-mul jobs run in parallel. GPU cores are very good at add-muls, and they have dozens or hundreds of add-mul cores.

The CPU is left doing the simpler stuff -- like AI and other game logic.
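
To illustrate, here's a plain C++ sketch of a single vertex transform -- the kind of work a GPU does in bulk. Each output component is a pure chain of multiplies and adds with no branches, and no vertex depends on any other:

    struct Vec4 { float x, y, z, w; };
    struct Mat4 { float m[4][4]; };          // row-major 4x4 transform

    // One vertex: 16 multiplies + 12 adds, zero branches.
    Vec4 transform(const Mat4& M, const Vec4& v)
    {
        Vec4 r;
        r.x = M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w;
        r.y = M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w;
        r.z = M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w;
        r.w = M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w;
        return r;
    }

    // Embarrassingly parallel: every iteration is independent.
    void transformAll(const Mat4& M, const Vec4* in, Vec4* out, int n)
    {
        for (int i = 0; i < n; ++i)
            out[i] = transform(M, in[i]);
    }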

Hassan Syed
+6  A: 

Sometimes a scene may have more going on than it appears. For example, a rotating teapot with thousands of vertices, environment mapping, bump mapping, and other complex pixel shaders all being rendered simultaneously amounts to a whole lot of processing. A lot of times these teapot demos are simply meant to show off some sort of special effect. They also may not always make the best use of the GPU when absolute performance isn't the goal.

In a game you may see similar effects, but they're usually done in a compromised fashion in an effort to maximize the frame rate. These optimizations extend to everything you see in the game. The question becomes, "How can we create the most spectacular and realistic scene with the least amount of processing power?" It's what makes game programmers some of the best optimizers around.

Steve Wortham
+3  A: 

In addition, there are many, many tricks from an artistic standpoint to save computational power. In many games, especially older ones, shadows are precalculated and "baked" right into the textures of the map. Often, artists use planes (two triangles) to represent things like trees and special effects when the result looks mostly the same. Fog is an easy way to avoid rendering far-off objects, and games often keep multiple resolutions of every object for far, mid, and near views.
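
As a sketch of that last trick, here's a made-up distance-based LOD selector (the Model type and the distance thresholds are invented for illustration):

    #include <cstddef>   // NULL

    struct Model { /* vertex/index buffers, ... */ };

    struct LodSet {
        Model* high;     // full detail, for close-ups
        Model* medium;   // reduced detail
        Model* low;      // a handful of triangles, or just a billboard
    };

    // Returns the model to draw, or NULL if the object is fully fogged out.
    Model* selectLod(const LodSet& lods, float distance, float fogEnd)
    {
        if (distance > fogEnd)  return NULL;   // hidden by fog: skip entirely
        if (distance < 50.0f)   return lods.high;
        if (distance < 200.0f)  return lods.medium;
        return lods.low;
    }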

erjiang
+25  A: 

Patience, technical skill and endurance.
The first point is that a DX demo is primarily a teaching aid, so it's written for clarity, not speed of execution.

It's a pretty big subject to condense, but game development is primarily about understanding your data and your execution paths to an almost pathological degree.
1 - your code is designed around two things: your data and your target hardware.
2 - the fastest code is the code that never gets executed - sort your data into batches and only do expensive operations on the data that needs them.
3 - how you store your data is key - aim for contiguous access; this allows you to batch-process at high speed (see the sketch after this list).
4 - parallelise everything you possibly can.
5 - modern CPUs are fast, modern RAM is very slow - cache misses are deadly.
6 - push as much to the GPU as you can - it has fast local memory, so it can blaze through the data, but you need to help it out by organising your data correctly.
7 - avoid doing lots of render-state switches (again, batch similar vertex data together), as these cause the GPU to stall.
8 - swizzle your textures and ensure they are powers of two - this improves texture cache performance on the GPU.
9 - use LODs as much as you can, i.e. low/medium/high versions of 3D models, switched based on distance from the camera - there's no point rendering a high-res version if it's only 5 pixels on screen.
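
As promised in point 3, here's a small sketch of what contiguous, batched data buys you (the particle update is made up, but the memory layout is the point):

    #include <vector>

    struct Particle { float x, y, z, vx, vy, vz; };   // 24 contiguous bytes

    // One tight loop over a flat array: the hardware prefetcher streams the
    // data through the cache, so points 3 and 5 above are both satisfied.
    // Chasing pointers to individually heap-allocated objects instead would
    // cost a cache miss on almost every access.
    void integrate(std::vector<Particle>& particles, float dt)
    {
        for (size_t i = 0; i < particles.size(); ++i) {
            Particle& p = particles[i];
            p.x += p.vx * dt;
            p.y += p.vy * dt;
            p.z += p.vz * dt;
        }
    }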

zebrabox
(+1) but he didn't ask how to develop for performance :P
Hassan Syed
+20  A: 

3D games are great at tricking your eyes. For example, there is a technique called screen space ambient occlusion (SSAO) which will give a more realistic feel by shadowing those parts of a scene that are close to surface discontinuities. If you look at the corners of your wall, you will see they appear slightly darker than the centers in most cases.

The very same effect can be achieved using radiosity, which is based on a rather accurate simulation. Radiosity will also take into account more effects of bouncing light, and so on, but it is computationally expensive - it's a global illumination technique, far too slow for real time.

This is just one example. There are hundreds of algorithms for real-time computer graphics, and they are essentially based on good approximations and typically make a lot of assumptions. For example, spatial sorting must be chosen very carefully depending on the speed and typical position of the camera, as well as the amount of change to the scene geometry.

These 'optimizations' are huge - you can implement an algorithm efficiently and make it run 10 times faster, but choosing a smart algorithm that produces a similar result ("cheating") can make you go from O(N^4) to O(log(N)).

Optimizing the actual implementation is what makes games even more efficient, but that is only a linear optimization.
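
As a concrete example of the spatial sorting mentioned above, here's about the simplest such structure, a uniform grid (2D, with invented sizes, just to show the idea):

    #include <vector>
    #include <cmath>

    // Bucket object ids by cell. A "who is near (x, y)?" query then only
    // inspects the few surrounding cells instead of testing all N objects:
    // the naive all-pairs check is O(N^2), this is close to O(N).
    struct Grid {
        float cellSize;
        int   width, height;                      // cells per axis
        std::vector< std::vector<int> > cells;    // object ids per cell

        Grid(float cs, int w, int h)
            : cellSize(cs), width(w), height(h), cells(w * h) {}

        int cellIndex(float x, float y) const {
            int cx = (int)std::floor(x / cellSize);
            int cy = (int)std::floor(y / cellSize);
            if (cx < 0) cx = 0; if (cx >= width)  cx = width  - 1;
            if (cy < 0) cy = 0; if (cy >= height) cy = height - 1;
            return cy * width + cx;
        }

        void insert(int id, float x, float y) {
            cells[cellIndex(x, y)].push_back(id);
        }
    };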

mnemosyn
+1 for SSAO -- I hadn't heard of that before.
John Feminella
+3  A: 
  1. Scene management: kd-trees, frustum culling, BSPs, hierarchical bounding boxes, potentially visible sets.
  2. LOD: switching in lower-detail versions to substitute for far-away objects.
  3. Impostors: like LOD, but not even an object, just a picture or 'billboard'.
  4. SIMD (see the sketch after this list).
  5. Custom memory management: aligned memory, less fragmentation.
  6. Custom data structures (i.e. no STL, relatively minimal templating).
  7. Assembly in places, mainly for SIMD.
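A small sketch of point 4, using SSE intrinsics to do four multiply-adds per instruction (the function is invented; the intrinsics are real):

    #include <xmmintrin.h>   // SSE

    // Scale-and-bias over an array, four floats per iteration.
    void scaleAndBias(float* data, int count, float scale, float bias)
    {
        __m128 s = _mm_set1_ps(scale);
        __m128 b = _mm_set1_ps(bias);
        int i = 0;
        for (; i + 4 <= count; i += 4) {
            __m128 v = _mm_loadu_ps(data + i);       // load 4 floats
            v = _mm_add_ps(_mm_mul_ps(v, s), b);     // 4 muls + 4 adds
            _mm_storeu_ps(data + i, v);              // store 4 floats
        }
        for (; i < count; ++i)                       // scalar tail
            data[i] = data[i] * scale + bias;
    }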
Charles Eli Cheese
+8  A: 

Whilst many answers here provide excellent indications of how, I will instead answer the simpler question of why.

Perhaps the best example (certainly one of the best known) is id Software. They realised very early, in the days of Commander Keen (well before 3D), that coming up with a clever way to achieve something1, even if it relied on modern hardware (in this case an EGA graphics card!), that was graphically superior to the competition would make a game stand out. This was true, but they further realised that, rather than having to come up with new games and content themselves, they could licence the technology, thus earning income from others whilst being able to develop the next generation of engine and thus leapfrog the competition again.

The abilities of these programmers (coupled with business savvy) are what made them rich.

That said, it is not necessarily money that motivates such people. It is likely just as much the desire to achieve, to accomplish. The money they earned in the early days simply means that they now have time to devote to what they enjoy. And whilst many have outside interests, almost all still program and try to work out ways to do better than the last iteration.

Put simply, the person who wrote the teapot demo likely had one or more of the following issues:

  • less time
  • less resources
  • less reward incentive
  • less internal and external competition
  • lesser goals
  • less talent

The last may sound harsh2, but clearly some are better than others; bell curves sometimes have extreme ends, and those people tend to be attracted to the corresponding extreme ends of what can be done with that skill.

The lesser goals one is actually likely to be the main reason. The target of the teapot demo was just that, a demo. But not a demo of the programmer's skill3. It would be a demo of one small facet of a (big) OS, in this case DX rendering.

To those viewing the demo, it wouldn't matter if it used way more CPU than required, so long as it looked good enough. There would be no incentive to eliminate waste when there would be no beneficiary. In comparison, a game would love to have spare cycles for better AI, better sound, more polygons, more effects.


  1. in that case smooth scrolling on PC hardware
  2. likely more talented than me, just so we're clear about that
  3. strictly speaking it would have been a demo to his/her manager too, but again the drive here would be time and/or visual quality.
ShuggyCoUk
+5  A: 

Eeeeek!

I know that this question is old, but it's surprising that no one has mentioned VSync!!!

You compared the CPU usage of the game at 60fps to CPU usage of the teapot demo at 60fps.

Isn't it apparent that both run (more or less) at exactly 60fps? That leads to the answer...

Both apps run with vsync enabled! This means (dumbed down) that the rendering frame rate is locked to the "vertical blank interval" of your monitor. The graphics hardware (and/or driver) will render at 60fps max. 60fps = 60Hz (Hz = per second) refresh rate. So you probably use a rather old, flickering CRT or a common LCD display. On a CRT running at 100Hz you will probably see frame rates of up to 100fps. VSync also applies in a similar way to LCD displays (they usually have a refresh rate of 60Hz).

So, the teapot demo may actually run much more efficiently! If it uses 30% of CPU time (compared to 50% CPU time for GTA IV), then it probably uses less CPU time per frame and just waits longer for the next vertical blank interval. To compare both apps, you should disable vsync and measure again (you will measure much higher fps for both apps).

Sometimes it's OK to disable vsync (most games have an option in their settings), but you may see "tearing artefacts" when vsync is disabled.
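
In Direct3D 9, for example, vsync is a single field in the present parameters at device-creation time (just a fragment, not a complete setup):

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed   = TRUE;
    pp.SwapEffect = D3DSWAPEFFECT_DISCARD;

    // Vsync on: Present() blocks until the next vertical blank, capping
    // the frame rate at the monitor's refresh rate.
    pp.PresentationInterval = D3DPRESENT_INTERVAL_ONE;

    // Vsync off: Present() returns immediately -- use this to measure the
    // real, uncapped frame rate (expect tearing).
    // pp.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE;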

You can find the details, and why vsync is used, on Wikipedia: http://en.wikipedia.org/wiki/Vsync

frunsi
+1, a teapot should run at >10000 fps. I'm quite surprised it wasn't mentioned, too.
Calvin1602
A: 

From what I know of the Unreal series, some conventions, like encapsulation, are broken. Code is compiled to bytecode or directly into machine code depending on the game. Also, objects are rendered and packaged in the form of meshes, and things such as textures, lighting and shadows are precalculated, whereas a pure 3D animation would require doing this in real time. When the game is actually running there are also optimizations such as rendering only the visible parts of an object and displaying texture detail only when close up. Finally, it's probable that video games are designed to get the best out of a platform at a given time (e.g. Intel x86 MMX/SSE, DirectX, ...).

James P.