tags:
views: 193
answers: 5

Where can I turn for information regarding the computing times of mathematical functions? Has any general study of this been made with any degree of rigor?

For instance, the computing time of

constant + constant

is generally O(1).

Suppose I want to start using math like integrals, and I'd like to get an asymptotic approximation for various integrals. Has there been a standard study of this, or must I take the information I have and figure out my own approximation? I'd be very interested in a standard approach to this, and I'd like to know if one already exists.

Here's my motivation: I'm in the middle of writing a paper that points out the equivalence between NP-hard problems and certain types of mathematical equations. It seems that there might be use for a generalized study of math computing times, almost like a new science.

EDIT: I guess I'm wondering if there is a standard computational complexity to any given math that cannot be avoided. I'm wondering if anyone has studied this question. I'd love to see what others have tried.

EDIT 2: Wikipedia has an article on "Computational complexity theory", which I think may fit the bill. I'm still wondering if someone who has studied this could confirm it.

+8  A: 

"Standard" math has no notion of algorithmic complexity. That's reserved for computer algorithms.

There are ways to analyze the dynamic behavior of solutions of equations. Things like convergence matter a great deal to mathematicians.

You can ask what the algorithmic complexity of Euler integration is versus, say, fifth-order Runge-Kutta. They would compare based on the number of function evaluations required and on time-step stability.
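As a rough sketch of that kind of comparison (my illustration, using classical fourth-order Runge-Kutta as a stand-in for a fifth-order method): both integrators cost O(1) function evaluations per step, one for Euler and four for RK4, but the higher-order method buys much smaller error per step.

    import math

    def euler_step(f, t, y, h):
        """One forward-Euler step: 1 function evaluation, O(h^2) local error."""
        return y + h * f(t, y)

    def rk4_step(f, t, y, h):
        """One classical RK4 step: 4 function evaluations, O(h^5) local error."""
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Integrate y' = y (exact solution e^t) from t = 0 to t = 1 in 100 steps.
    f = lambda t, y: y
    for step, name in ((euler_step, "Euler"), (rk4_step, "RK4")):
        y, h = 1.0, 0.01
        for i in range(100):
            y = step(f, i * h, y, h)
        print(f"{name}: |error| = {abs(y - math.e):.2e} after 100 steps")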

But what's the "running time" of the solution to Fermat's Last Theorem? What about the last of David Hilbert's challenge problems? Is the "running time" for those a century and counting? What's your running time for solving a partial differential equation using separation of variables?

When you think about it that way, do you have a better understanding of why people would be put off by your question?

duffymo
Again, I'm wondering if there is a standard computational complexity to math that cannot be avoided. I'm wondering if anyone has studied this. Thanks for your answer. I guess I asked the wrong question.
Matt Groff
@user389117 - standard (i.e. serious) math does not involve computing things. It is about solving equations analytically, proving theorems, etc. Computational complexity only applies to those "applied" branches of mathematics that deal with computational processes; e.g. algorithms.
Stephen C
I'm sorry for being headstrong. My main focus was to see if any attempt has been made to standardize the algorithms associated with math functions. For instance, whether there is a standard computational complexity for given math functions that no known method can avoid in reaching a solution to a given problem. I'd still really like to see if a standard study of the math, or perhaps more precisely of the functions, as opposed to the algorithms, exists.
Matt Groff
I think you have your answer: no, there's no such thing. Mathematics is a big field: number theory, group theory, calculus, statistics, etc. "Standard computational complexity" has no meaning across such disparate areas of thought.
duffymo
@Duffymo: Wikipedia lists "Computational complexity theory", which I think may be the ticket. Thanks so much for your help, though. I'd really like to hear what others have to say regarding this. I'll edit my question...
Matt Groff
+1  A: 

I agree with the comments on this thread that the question is unclear. This seems to be mostly due to incorrect usage of computational-complexity terms. I refer you to one of the best books I've gotten my hands on about this subject: Computational Complexity: A Modern Approach. The draft is very thorough, open for anyone to read/download, and will probably give you some insight into what the other users said.

Some common misconceptions addressed in the very first chapters of the book: that mathematical functions can be infeasible to compute, the minimum amount of time required to solve a problem (such as proving a theorem), and the computational complexity of proof systems for various logics (propositional, first-order, second-order, and so on).

Daniel Ribeiro
+3  A: 

Yes, for various mathematical functions, the computational complexity (running time) of computing the function has been studied. This can differ depending on the model of computation.

For example, adding two n-bit numbers takes Θ(n) time, multiplying them takes Θ(n log n log log n) time with FFT-based algorithms (Schönhage–Strassen), finding their gcd takes Θ(n²) time with the usual Euclidean algorithm and Θ(n (log n)² log log n) with better algorithms, etc. For more complicated stuff like integrals, it obviously depends on what algorithm you use.
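A minimal sketch to observe these growth rates empirically (my illustration, with arbitrary sizes; note that CPython's big-integer multiply actually uses Karatsuba, roughly Θ(n^1.585), rather than an FFT method):

    import math
    import random
    import time

    def time_op(op, n_bits, trials=5):
        """Rough wall-clock time of a binary operation on random n-bit integers."""
        a = random.getrandbits(n_bits) | (1 << (n_bits - 1))  # force exactly n bits
        b = random.getrandbits(n_bits) | (1 << (n_bits - 1))
        start = time.perf_counter()
        for _ in range(trials):
            op(a, b)
        return (time.perf_counter() - start) / trials

    for n in (10_000, 50_000, 250_000):
        add = time_op(lambda x, y: x + y, n)  # expected to grow roughly linearly
        mul = time_op(lambda x, y: x * y, n)  # Karatsuba in CPython, ~Θ(n^1.585)
        g = time_op(math.gcd, n)              # growth depends on CPython's gcd implementation
        print(f"{n:>7} bits: add {add:.2e}s, mul {mul:.2e}s, gcd {g:.2e}s")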

ShreevatsaR
+2  A: 

user389117,

I think that subconsciously you want to deduce the complexity of computing a mathematical expression from the form of that expression.

E.g., for an expression involving the square of a variable (x^2), you think (at least subconsciously) that the complexity of the computation is analogous to x^2, so the complexity should be something like O(n^2), or that there is a standard process to deduce the form of the complexity from the form of the equation.

These are two different qualities, and one cannot be deduced from the other.
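One concrete illustration of this point (a sketch of my own, not a claim about any standard process): the single expression x^n can be computed by algorithms of very different cost, so the form alone determines nothing.

    def pow_naive(x, n):
        """Compute x**n with repeated multiplication: Θ(n) multiplications."""
        result = 1
        for _ in range(n):
            result *= x
        return result

    def pow_squaring(x, n):
        """Compute x**n by repeated squaring: Θ(log n) multiplications."""
        result = 1
        while n > 0:
            if n & 1:      # low bit set: fold the current power in
                result *= x
            x *= x         # square the base
            n >>= 1
        return result

    assert pow_naive(3, 13) == pow_squaring(3, 13) == 3 ** 13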

I will give you an example: in papers, algorithms are written in pseudocode, and then the authors deduce the complexity of that pseudocode.

The pseudocode must inevitably be written first; only then can you compute the complexity.

There is no magical way to derive the complexity from the form of the thing you want to compute.

Even if you compute the complexity and find that its form is analogous to the form of the equation computed, I think it would be hard, at least at first, to turn that observation from pseudo-science into science.

Good Luck!

Novemberland
+2  A: 

There isn't a collected body of work, but work on approximating functions comes close. For example, you'd like to know that approximating sin(x) to within an error epsilon can be done in time proportional to some polynomial in log(x) and 1/epsilon. There isn't a general theory here (though you should look up information-based complexity), and focusing on specific functions might help.
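For a concrete feel of that accuracy/time tradeoff, here is a minimal Taylor-series sketch (my own illustration; sin_taylor is a hypothetical helper, not a library function) in which the number of terms, and hence the work, grows as epsilon shrinks:

    import math

    def sin_taylor(x, eps):
        """Approximate sin(x) by summing Taylor terms until the next term < eps.

        For an alternating series with decreasing terms, the truncation error
        is bounded by the first omitted term, so we stop once |term| < eps.
        """
        term = x       # first term: x^1 / 1!
        total = 0.0
        k = 0
        terms = 0
        while abs(term) >= eps:
            total += term
            k += 1
            term *= -x * x / ((2 * k) * (2 * k + 1))  # next Taylor term
            terms += 1
        return total, terms

    for eps in (1e-3, 1e-6, 1e-12):
        approx, terms = sin_taylor(1.5, eps)
        print(f"eps={eps:.0e}: {terms} terms, error={abs(approx - math.sin(1.5)):.1e}")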

Suresh
What if it's interpolation from a lookup table for a trig function? Sine evaluation is O(1) then.
duffymo
sure. then you've "paid" for the time in space. it's always a tradeoff. the bound I gave was an example of the kind of bound you'd like to see as a pure time bound.
Suresh
It seems that the information content is important. This "tradeoff" seems hard to avoid, and I think that information complexity may help to encapsulate many essential ideas. But I'm starting to think that it has been too difficult to develop a theory of the kind I'm looking for at present. However, we may simply be missing some keen observations...
Matt Groff
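A minimal sketch of the space-for-time tradeoff discussed in these comments (my own illustration; the table size is arbitrary): accuracy is fixed by the table paid for up front, while each evaluation is O(1).

    import math

    TABLE_SIZE = 1024
    STEP = 2 * math.pi / TABLE_SIZE
    # Precompute sin over [0, 2*pi]: O(TABLE_SIZE) space, paid once up front.
    TABLE = [math.sin(i * STEP) for i in range(TABLE_SIZE + 1)]  # +1 entry for interpolation at the edge

    def sin_lookup(x):
        """O(1) sine via linear interpolation in a precomputed table.

        The error is bounded by roughly STEP**2 / 8 for linear interpolation,
        regardless of the input, so accuracy is set by TABLE_SIZE alone.
        """
        x = x % (2 * math.pi)
        pos = x / STEP
        i = int(pos)
        frac = pos - i
        return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

    print(abs(sin_lookup(1.5) - math.sin(1.5)))  # on the order of 1e-6 for 1024 entries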