You may find this interesting:
http://www.xbitlabs.com/news/video/display/20091001171332_AMD_Nvidia_PhysX_Will_Be_Irrelevant.html
It is biased ... it's basically an interview with AMD ... but it makes some points which I think are worth considering in your case.
Because of the issues David Seiler pointed out, switching physics engines some time in the future may be a huge/insurmountable problem... particularly if the gameplay is tightly bound to the physics.
So, if you really want hardware-accelerated physics in your engine NOW, go for PhysX, but be aware that when solutions such as those postulated by AMD in this article become available (they absolutely will, but they're not here yet), you will be faced with unpleasant choices:
1) rewrite your engine to use (insert name of new cross-platform hardware-accelerated physics engine), potentially changing the dynamics of your game in a Bad Way
2) continue using PhysX only, entirely neglecting AMD users
3) try to get PhysX to work on AMD GPUs (blech...)
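Whichever way you lean, option 1 stings far less if your gameplay code never talks to the physics engine directly. A minimal sketch of that idea (every name here is hypothetical, not any engine's real API):

    // Hypothetical engine-agnostic facade: gameplay code holds only
    // handles and calls only this interface, so swapping PhysX for
    // Bullet/ODE later means writing one new adapter, not touching
    // every gameplay file.
    struct Vec3 { float x, y, z; };
    typedef unsigned int BodyId;

    class IPhysicsWorld {
    public:
        virtual ~IPhysicsWorld() {}
        // Create a dynamic sphere; gameplay keeps the returned handle.
        virtual BodyId addSphere(const Vec3& pos, float radius, float mass) = 0;
        virtual void   applyImpulse(BodyId body, const Vec3& impulse) = 0;
        virtual Vec3   positionOf(BodyId body) const = 0;
        virtual void   step(float dt) = 0;  // advance simulation by dt seconds
    };

    // One adapter per engine:
    //   class PhysXWorld  : public IPhysicsWorld { ... };
    //   class BulletWorld : public IPhysicsWorld { ... };

The handle-based API is the important part: if gameplay code holds raw engine objects (btRigidBody pointers, PxActor pointers), switching engines becomes exactly the nightmare described above.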
Aside from David's idea of using a CPU physics engine as a fallback (doing twice the work and producing two engines that do not behave identically), your only other option is to use pure CPU physics.
However, as stuff like OpenCL becomes mainstream, we may see ODE/Bullet/kin starting to incorporate it ... IOW, if you code it now with ODE/Bullet/kin, you might (probably will, eventually) get the GPU acceleration for "free" later on, with no changes to your code. It'll still behave slightly differently with the GPU version (an unavoidable problem, thanks to the butterfly effect and differences in floating-point implementation), but at least you'll have the ODE/Bullet/kin community working with you to reduce that gap.
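The floating-point part is easy to demonstrate: addition isn't associative, so a backend that merely reorders the same operations (which a GPU solver inevitably does) gets different bits out, and a chaotic pile of rigid bodies amplifies those bits into visibly different outcomes. A tiny self-contained example:

    #include <cstdio>

    int main()
    {
        // Same three numbers, two evaluation orders: float addition
        // is not associative, so the two sums differ outright.
        float a = 1e8f, b = -1e8f, c = 1.0f;
        float sum1 = (a + b) + c;  // 0 + 1 -> 1.0
        float sum2 = a + (b + c);  // c is absorbed: -1e8 + 1 rounds back to -1e8, so -> 0.0
        std::printf("%g vs %g\n", sum1, sum2);
        return 0;
    }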
That's my recommendation: use an open-source physics library that currently only uses the CPU, and wait for it to make use of GPUs via OpenCL, CUDA, ATI Stream, etc. Performance will be screaming fast when that happens, and you'll save yourself headaches.
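If you go the Bullet route, the engine-facing code is small anyway. A minimal dropped-sphere world looks roughly like this (ordinary Bullet 2.x API; the scene itself is just an illustration):

    #include <btBulletDynamicsCommon.h>
    #include <cstdio>

    int main()
    {
        // Boilerplate Bullet world: broadphase, dispatcher, solver.
        // Nothing here says "CPU"; a future accelerated solver would
        // slot in behind the same calls.
        btDefaultCollisionConfiguration config;
        btCollisionDispatcher dispatcher(&config);
        btDbvtBroadphase broadphase;
        btSequentialImpulseConstraintSolver solver;
        btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);
        world.setGravity(btVector3(0, -9.81f, 0));

        // One 1 kg sphere of radius 0.5, dropped from y = 10.
        btSphereShape sphere(0.5f);
        btVector3 inertia(0, 0, 0);
        sphere.calculateLocalInertia(1.0f, inertia);
        btDefaultMotionState motion(
            btTransform(btQuaternion(0, 0, 0, 1), btVector3(0, 10, 0)));
        btRigidBody body(
            btRigidBody::btRigidBodyConstructionInfo(1.0f, &motion, &sphere, inertia));
        world.addRigidBody(&body);

        // Fixed internal timestep: the single cheapest thing you can do
        // to keep runs repeatable, whatever hardware does the solving.
        for (int i = 0; i < 60; ++i)
            world.stepSimulation(1.0f / 60.0f, 10, 1.0f / 60.0f);

        btTransform t;
        body.getMotionState()->getWorldTransform(t);
        std::printf("y after 1 second: %f\n", t.getOrigin().getY());

        world.removeRigidBody(&body);
        return 0;
    }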