views:

1380

answers:

2

As the title suggests, I would like to know whether it is possible to write a program or algorithm that finds the time complexity of any given program taken as input.

Input: any program (P) [in any language, or in one particular language]

Output: the time complexity of that program (P).

Have there been any prior attempts to write such a program? Are there any algorithms available for this purpose?

If so, please provide the relevant links or references, or any other guidance you can.

+24  A: 

No. It's not possible. This is a form of the halting problem.

erickson
+1: Until you can prove the program P halts, you can't prove anything about its time complexity.
S.Lott
But for a non-Turing-complete language it might be possible :) (and it's an open question how useful such languages might be)
tonfa
Ignoring the theoretical objections about the halting problem, it would seem that you could make a program that gives you a rough estimate, just assuming the program eventually halts.
corymathews
@corymathews: No. You can't get a "rough estimate". Either you have a program whose complexity you can argue for, as a mathematical exercise, and be right; or you have a program which you cannot prove terminates, in which case you can't do anything more.
S.Lott
If cory's program existed, then even if it only worked for programs known to halt, it would solve that irritating P=NP question once and for all. Just run it against the "try every single program in pseudo-parallel" algorithm for the travelling salesman problem, and find out whether that has polynomial complexity. If so, P=NP; if not, P!=NP. Since P=NP is not solved, no such program exists.
Steve Jessop
+2  A: 

Proving the complexity of an arbitrary algorithm is not possible, but in principle you could estimate it (a rough sketch in code follows the list below):

  1. choose n, the size of the input
  2. run the algorithm
  3. observe t(n), the time needed to run the algorithm for input of size n
  4. choose another n, repeat steps 2 and 3 until you have a lot of data
  5. regress t(n) on n, n^k, log(n), n log(n), n!, or any other term that might be appropriate
  6. choose a term with statistical significance and declare that to be your estimated complexity of the algorithm
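
Here is a minimal sketch of steps 2 through 6 in Python. The timed function `algorithm`, the input sizes, and the set of candidate terms are all placeholders you would swap for your own; this is an illustration of the approach, not a rigorous measurement harness:

    import math
    import timeit
    import numpy as np

    def algorithm(n):
        # placeholder for the algorithm under test; here, an O(n log n) sort
        sorted(range(n, 0, -1))

    # steps 1-4: measure t(n) for a range of input sizes
    sizes = [2 ** k for k in range(10, 18)]
    times = [timeit.timeit(lambda: algorithm(n), number=5) / 5 for n in sizes]

    # step 5: one-parameter least-squares fit of t(n) against each candidate term
    candidates = {
        "log n":   lambda n: math.log(n),
        "n":       lambda n: float(n),
        "n log n": lambda n: n * math.log(n),
        "n^2":     lambda n: float(n) ** 2,
    }
    t = np.array(times)
    for name, term in candidates.items():
        x = np.array([term(n) for n in sizes])
        coef = x.dot(t) / x.dot(x)          # best fit: t(n) ~= coef * term(n)
        residual = np.linalg.norm(t - coef * x) / np.linalg.norm(t)
        print("%-8s relative residual = %.3f" % (name, residual))

    # step 6: the candidate with the smallest residual is your estimated complexity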

There are any number of pitfalls to this approach:

  1. This will not prove anything; it only gives an estimate
  2. If t(n) gets really large for large n, it will take a long time to collect enough data for your analysis
  3. There are many ways that this approach can be fooled unless you use huge values of n. For example, this algorithm will look like O(1) unless you use astronomical values for n:

    import time
    def deceptive_sort(data):
        time.sleep(10 * 24 * 60 * 60)   # sleep for 10 days
        return sorted(data)             # then run an O(n log n) sorting algorithm


And other SO users can come up with many more. But in some cases, like when you do a complexity analysis of an algorithm and want to verify it empirically, this is still a useful approach.

mobrule
The problem is step 3: you can't be certain that the program will *ever* halt for arbitrary input. This is the halting problem. The consequence is that even this estimation method won't, in general, produce a useful measure.
Bob Cross
Worse, there are terminating algorithms that will still take until heat death of the universe to produce their answer. You should probably know this before you start trying to run it and gather data.
S.Lott
True, but if you had a tolerance, say MAX t(n), the method could be used to make the estimate for running times below that bound (just force-kill the process after MAX t(n) time). It won't tell you whether the program will halt, but you will either discover the running time of the algorithm or get a result of "this program runs so long that it's effectively unusable for me" (see the sketch below).
Streklin
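
A minimal sketch of that timeout idea, assuming a Python harness and a hypothetical executable `./program` that takes the input size as an argument; `subprocess.run` kills the child process when the timeout expires:

    import subprocess
    import time

    MAX_SECONDS = 60.0  # placeholder budget: the "MAX t(n)" tolerance

    def timed_trial(n):
        """Time one run of the program under test on input size n.
        Returns elapsed seconds, or None if the budget was exceeded."""
        start = time.perf_counter()
        try:
            # "./program" is a hypothetical executable being measured
            subprocess.run(["./program", str(n)], timeout=MAX_SECONDS)
        except subprocess.TimeoutExpired:
            return None  # "runs so long that it's effectively unusable"
        return time.perf_counter() - start
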
I wouldn't say this isn't useful in general. Most algorithms that you want to benchmark will trivially halt. As mobrule pointed out, this doesn't prove anything about runtimes, but it does provide a useful statistical test.
Nick Johnson
And for practical purposes, asymptotic behaviour is in any case completely, utterly irrelevant. I have literally never written a program where I care about its runtime if that runtime exceeds a year, and a year is well short of the limit as N tends to infinity. If your so-called algorithm doesn't halt for some inputs, then you have worse problems than that you can't measure its apparent complexity (for small N) by this method...
Steve Jessop