I recently compared some of the physics engines out there for simulation and game development. Some are free, some are open source, some are commercial (one is even very commercial, $$$$): Havok, ODE, Newton (a.k.a. oxNewton), Bullet, PhysX, and the "raw" built-in physics in some 3D engines.

At some stage I came to a conclusion, or rather a question: why should I use anything but NVIDIA PhysX if I can make use of its amazing performance (when I need it) thanks to GPU processing? With future NVIDIA cards I can expect further improvement independent of the regular CPU generation steps. The SDK is free and it is available for Linux as well. Of course, it is a bit of a vendor lock-in, and it is not open source.

What's your view or experience? If you were starting development right now, would you agree with the above?

cheers

A: 

If all your code is massively parallelizable, then go for it!

For everything else, GPUs are woefully inadequate.

Javier
He is talking about game physics - the propagation of multiple objects and their interactions with each other simultaneously. This problem is inherently parallel and scales as the number of objects increases. Therefore GPUs are a great choice... except for the vendor lock-in issue.
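To see why this kind of workload parallelizes so well, note that each body's integration step touches only that body's own state; a toy sketch (purely illustrative, not tied to any particular engine):

    // Toy illustration: every iteration reads and writes only its own body,
    // so the loop could be split across CPU cores or GPU threads.
    // (Collision response adds coupling, but broad-phase and integration
    // still scale with the object count.)
    #include <cstddef>
    #include <vector>

    struct Body { float pos[3]; float vel[3]; };

    void integrate(std::vector<Body>& bodies, float dt)
    {
        for (std::size_t i = 0; i < bodies.size(); ++i)
            for (int k = 0; k < 3; ++k)
                bodies[i].pos[k] += bodies[i].vel[k] * dt;
    }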
muusbolla
+1  A: 

The hypothetical benefit of future gfx cards is all well and good, but there will be future benefits from extra CPU cores too. Can you be sure that future gfx cards will always have spare capacity for your physics?

But probably the best reason, albeit a little vague in this case, is that performance isn't everything. As with any 3rd party library, you may need to support and upgrade that code for years to come, and you're going to want to make sure that the interfaces are reasonable, the documentation is good, and that it has the capabilities that you require.

There may also be more mathematical concerns such as some APIs offering more stable equation solving and the like, but I'll leave comment on that to an expert.

Kylotan
+3  A: 

Disclaimer: I've never used PhysX, my professional experience is restricted to Bullet, Newton, and ODE. Of those three, ODE is far and away my favorite; it's the most numerically stable and the other two have maturity issues (useful joints not implemented, legal joint/motor combinations behaving in undefined ways, &c).

You alluded to the vendor lock-in issue in your question, but it's worth repeating: if you use PhysX as your sole physics solution, people using AMD cards will not be able to run your game (yes, I know it can be made to work, but it's not official or supported by NVIDIA). One way around this is to define a failover engine, using ODE or something similar on systems with AMD cards. This works, but it doubles your workload. It's seductive to think that you'll be able to hide the differences between the two engines behind a common interface and write the bulk of your game physics code once, but most of your difficulties with game physics will be in dealing with the idiosyncrasies of your particular physics engine: deciding on values for things like contact friction and restitution. Those values don't have consistent meanings across physics engines and (mostly) can't be formally derived, so you're stuck finding good-looking, playable values by experiment. With PhysX plus a failover you're doing all that scut work twice.
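To make that concrete, here is a minimal sketch of what such a common interface might look like; IPhysicsBackend, MaterialTuning and the numbers are made up for illustration, not taken from any SDK, and the point is that the per-backend tuning constants still have to be found by hand for each engine:

    // Hypothetical abstraction layer: the names and values below are
    // illustrative only, not part of ODE, PhysX or any other SDK.
    #include <string>

    struct MaterialTuning {
        float friction;     // no consistent meaning across engines
        float restitution;  // likewise: re-tuned by experiment per backend
    };

    class IPhysicsBackend {
    public:
        virtual ~IPhysicsBackend() {}
        virtual void setMaterial(const std::string& name,
                                 const MaterialTuning& tuning) = 0;
        virtual void step(float dt) = 0;
    };

    // Even behind one interface, each backend keeps its own hand-tuned
    // constants, so the experimentation is still done once per engine.
    static const MaterialTuning kCrateOnOde   = { 0.8f, 0.10f };
    static const MaterialTuning kCrateOnPhysx = { 0.5f, 0.25f };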

At a higher level, I don't think any of the stream processing APIs are fully baked yet, and I'd be reluctant to commit to one until, at the very least, we've seen how the customer reaction to Intel's Larrabee shapes people's designs.

So, far from seeing PhysX as the obvious choice for high-end game development, I'd say it should be avoided unless either you don't think people with AMD cards make up a significant fraction of your player base (highly unlikely) or you have enough coding and QA manpower to test two physics engines (more plausible, though if your company is that wealthy, I've heard good things about Havok). Or, I guess, if you've designed a physics game with performance demands so intense that only streaming physics can satisfy you - but in that case, I'd advise you to start a band and let Moore's Law do its thing for a year or two.

David Seiler
+2  A: 

You may find this interesting:

http://www.xbitlabs.com/news/video/display/20091001171332_AMD_Nvidia_PhysX_Will_Be_Irrelevant.html

It is biased ... it's basically an interview with AMD ... but it makes some points which I think are worth considering in your case.

Because of the issues David Seiler pointed out, switching physics engines some time in the future may be a huge/insurmountable problem... particularly if the gameplay is tightly bound to the physics.

So, if you really want hardware-accelerated physics in your engine NOW, go for PhysX, but be aware that when solutions such as those postulated by AMD in this article become available (they absolutely will, but they're not here yet), you will be faced with unpleasant choices:

1) rewrite your engine to use (insert name of new cross-platform hardware-accelerated physics engine), potentially changing the dynamics of your game in a Bad Way

2) continue using PhysX only, entirely neglecting AMD users

3) try to get PhysX to work on AMD GPUs (blech...)

Aside from David's idea of using a CPU physics engine as a fallback (doing twice the work and producing two engines which do not behave identically), your only other option is to use pure CPU physics.

However, as stuff like OpenCL becomes mainstream, we may see ODE/Bullet/kin starting to incorporate it... IOW, if you code against ODE/Bullet/kin now, you might (and probably will eventually) get GPU acceleration for "free" later on, with no changes to your code. It'll still behave slightly differently with the GPU version (an unavoidable problem because of the butterfly effect and differences in floating-point implementation), but at least you'll have the ODE/Bullet/kin community working with you to reduce that gap.
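For example, a minimal ODE program looks something like the sketch below (standard ODE API calls, though I haven't compiled this exact listing); the game only ever talks to the library interface, so a later GPU-accelerated build of the same library could drop in without touching this code:

    // Minimal ODE sketch (standard ODE API, untested as written): all the
    // game sees is the library interface, so an accelerated backend could
    // be swapped in underneath without source changes.
    #include <ode/ode.h>
    #include <cstdio>

    int main()
    {
        dInitODE();
        dWorldID world = dWorldCreate();
        dWorldSetGravity(world, 0, 0, -9.81);

        dBodyID ball = dBodyCreate(world);   // default mass parameters
        dBodySetPosition(ball, 0, 0, 10);

        // step one simulated second at 60 Hz
        for (int i = 0; i < 60; ++i)
            dWorldStep(world, 1.0f / 60.0f);

        const dReal* p = dBodyGetPosition(ball);
        std::printf("ball z after 1s: %f\n", (double)p[2]);

        dWorldDestroy(world);
        dCloseODE();
        return 0;
    }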

That's my recommendation: use an open source physics library which currently only uses the CPU, and wait for it to make use of GPUs via OpenCL, CUDA, ATI's stream language, etc. Performance will be screaming fast when that happens, and you'll save yourself headaches.

Blake Miller
'and do not behave identically'. A point worth repeating, and one I glossed over in my reply.
David Seiler
A: 

PhysX works with non-NVIDIA cards; it just doesn't get accelerated, which leaves it in the same position the other engines start from. The problem comes if you have a physical simulation which is only workable with hardware physics acceleration.
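One way to keep such a simulation workable everywhere is to budget the expensive parts around whatever acceleration is present; a rough sketch, where gpuPhysicsAvailable() and the numbers are placeholders for whatever capability query and limits your engine actually has (not a real PhysX call):

    // Rough sketch: gpuPhysicsAvailable() and the budgets below are
    // placeholders, not part of the PhysX API.
    #include <cstddef>

    bool gpuPhysicsAvailable();   // assumed to be implemented elsewhere

    // Pick a debris budget the current machine can actually simulate, so the
    // game stays playable even when physics falls back to the CPU.
    std::size_t debrisBudget()
    {
        const std::size_t kMaxDebrisGpu = 4000;
        const std::size_t kMaxDebrisCpu = 400;
        return gpuPhysicsAvailable() ? kMaxDebrisGpu : kMaxDebrisCpu;
    }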

John