Any continuous function (which includes most common math operations) can be approximated arbitrarily well over a bounded interval by a polynomial (the Weierstrass approximation theorem). This, together with the relatively simple identities that common math functions satisfy (addition laws, for example) and table lookups, is the basis of the standard techniques for constructing fast approximation algorithms (and also of the high-accuracy methods used in the system math library).
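For example, here is a minimal sketch of how those pieces combine in a fast exponential: an addition-law identity reduces the argument to a bounded interval, and a low-degree polynomial handles the rest. (The function name fast_expf is mine, and the coefficients are the Taylor ones for clarity; a real implementation would use minimax coefficients, as discussed below.)

```c
#include <math.h>

/* Sketch only: no handling of overflow, underflow, or NaN. */
static float fast_expf(float x) {
    const float ln2 = 0.6931471805599453f;
    /* Range reduction via exp(x) = 2^k * exp(r),
       with k = round(x/ln2) so that r lies in [-ln2/2, ln2/2]. */
    float k = rintf(x / ln2);
    float r = x - k * ln2;
    /* Degree-5 polynomial for exp(r) on the reduced interval,
       evaluated with Horner's rule. */
    float p = 1.0f + r*(1.0f + r*(0.5f + r*(1.0f/6 + r*(1.0f/24 + r*(1.0f/120)))));
    return ldexpf(p, (int)k);  /* reassemble: 2^k * exp(r) */
}
```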
Taylor series are usually a poor choice, however: Chebyshev or minimax polynomials have much better error characteristics for most computational uses. The standard technique for fitting minimax polynomials is the Remez algorithm, which is implemented in much commercial math software, or you can roll your own implementation in a day's work if you know what you're doing.
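To make the difference concrete, the following self-contained comparison (an illustration, not a Remez implementation) interpolates exp at the Chebyshev nodes and measures the maximum error on [-1, 1] against the degree-5 Taylor polynomial; the Chebyshev interpolant is already close to minimax, and running Remez would equalize the error ripples and tighten it a little further.

```c
#include <math.h>
#include <stdio.h>

#define N 6  /* number of nodes = polynomial degree + 1 */

int main(void) {
    const double pi = 3.14159265358979323846;
    double xs[N], fs[N];
    /* Sample exp at the Chebyshev nodes cos((2i+1)*pi/(2N)). */
    for (int i = 0; i < N; i++) {
        xs[i] = cos((2*i + 1) * pi / (2*N));
        fs[i] = exp(xs[i]);
    }
    double max_cheb = 0, max_taylor = 0;
    for (int j = 0; j <= 1000; j++) {
        double x = -1.0 + 2.0*j/1000;
        /* Evaluate the Chebyshev-node interpolant (Lagrange form). */
        double cheb = 0;
        for (int i = 0; i < N; i++) {
            double term = fs[i];
            for (int k = 0; k < N; k++)
                if (k != i) term *= (x - xs[k]) / (xs[i] - xs[k]);
            cheb += term;
        }
        /* Degree-5 Taylor polynomial for exp about 0. */
        double taylor = 1 + x*(1 + x*(0.5 + x*(1.0/6 + x*(1.0/24 + x*(1.0/120)))));
        double ec = fabs(cheb - exp(x)), et = fabs(taylor - exp(x));
        if (ec > max_cheb)   max_cheb = ec;
        if (et > max_taylor) max_taylor = et;
    }
    printf("max error, Chebyshev interpolant: %g\n", max_cheb);
    printf("max error, Taylor polynomial:     %g\n", max_taylor);
}
```

The Taylor error is concentrated at the endpoints of the interval, while the Chebyshev error is spread nearly evenly across it, which is exactly the property the minimax criterion optimizes.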
For the record, the "fast inverse square root" should be avoided on modern processors, as it is substantially faster to use a floating-point reciprocal square root estimate instruction (rsqrtss/rsqrtps on SSE, vrsqrte on NEON, vrsqrtefp on AltiVec). Even the (non-approximate) hardware square root is quite fast on current Intel processors.
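A minimal sketch of that recommendation, assuming SSE (the helper name rsqrt_fast is mine): the estimate instruction is accurate to roughly 12 bits, and one Newton-Raphson refinement step brings it to around 22 bits, still far faster and more accurate than the bit-trick version.

```c
#include <xmmintrin.h>

static float rsqrt_fast(float x) {
    /* rsqrtss: hardware reciprocal square root estimate. */
    float y = _mm_cvtss_f32(_mm_rsqrt_ss(_mm_set_ss(x)));
    /* One Newton-Raphson step for f(y) = 1/y^2 - x:
       y <- y * (1.5 - 0.5 * x * y * y). */
    return y * (1.5f - 0.5f * x * y * y);
}
```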