Could it be done by keeping a counter to see how many iterations an algorithm goes through, or does the time duration need to be recorded?

A: 

Yes.

You can track both: actual performance (elapsed time) and the number of iterations.
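
For example, here is a minimal Java sketch (the bubble sort is just a stand-in for whatever algorithm you are measuring) that records both at once:

static void measure(int n) {
  int[] data = new java.util.Random(42).ints(n).toArray();  // random test input of size n
  long iterations = 0;
  long start = System.nanoTime();
  for (int i = 0; i < data.length - 1; i++) {
    for (int j = 0; j < data.length - i - 1; j++) {
      iterations++;                                          // count inner-loop passes
      if (data[j] > data[j + 1]) {                           // swap out-of-order neighbours
        int tmp = data[j]; data[j] = data[j + 1]; data[j + 1] = tmp;
      }
    }
  }
  long elapsedNanos = System.nanoTime() - start;
  System.out.println(n + " elements: " + iterations + " iterations, " + elapsedNanos + " ns");
}

Calling measure(1000), measure(2000), measure(4000), and so on lets you see how both numbers grow with the input size.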

Beth
+2  A: 

Algorithm complexity is defined as (roughly speaking):

the number of operations the algorithm performs as a function of its input length.

So you need to run your algorithm with various input lengths (e.g. for a sort, try sorting 10 elements, 100 elements, etc.) and count each operation (e.g. assignment, increment, mathematical operation, etc.) the algorithm performs.

This will give you a good "theoretical" estimate.
If, on the other hand, you want real-life numbers, use profiling.
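
A rough sketch of that process (countOperations is a hypothetical helper standing in for your own instrumented algorithm, returning how many assignments/increments/comparisons it performed):

for (int n = 10; n <= 100_000; n *= 10) {
  long ops = countOperations(n);                                 // hypothetical: run your instrumented algorithm on input of size n
  System.out.println(n + "\t" + ops + "\t" + (double) ops / n);  // total operations and operations per element
}

If ops/n stays roughly constant the algorithm looks linear; if it grows roughly in proportion to n, it looks quadratic, and so on.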

Oren A
Also, algorithm complexity is not defined in terms of all operations. For example, it's typical to assume that integer addition is O(1), while addition of big integers is O(log N), where N is the integer's value. This means there isn't ONE complexity for a given algorithm: it also depends on which operations you are interested in (usually the most costly ones), which is why sorting algorithm complexity is usually expressed in terms of the number of comparisons required.
Matthieu M.
+1  A: 

Might I suggest using the ANTS profiler? It will provide you with this kind of detail while you run your app with "experimental" data.

Lucas B
+1  A: 

The best way would be to actually count the number of "operations" performed by your algorithm. The definition of "operation" can vary: for an algorithm such as quicksort, it could be a comparison of two numbers.

You could measure the time taken by your program to get a rough estimate, but various factors could cause this value to differ from the actual mathematical complexity.
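
For instance, a quick sketch of a quicksort instrumented with a comparison counter (the counter is the only addition to a plain Lomuto-partition quicksort):

static long comparisons = 0;

static void quicksort(int[] a, int lo, int hi) {
  if (lo >= hi) return;
  int pivot = a[hi], i = lo;
  for (int j = lo; j < hi; j++) {
    comparisons++;                                  // count each comparison against the pivot
    if (a[j] < pivot) {
      int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
      i++;
    }
  }
  int tmp = a[i]; a[i] = a[hi]; a[hi] = tmp;        // put the pivot in its final position
  quicksort(a, lo, i - 1);
  quicksort(a, i + 1, hi);
}

Reset comparisons to zero before each run; sorting random arrays of increasing size and plotting the counter against n should show roughly n log n growth on average.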

casablanca
+1  A: 

The currently accepted answer won't give you any theoretical estimate unless you can somehow fit the experimentally measured times with a function that approximates them. This answer gives you a manual technique to do that and fills that gap.

You start by guessing the theoretical complexity function of the algorithm. Then you experimentally measure the actual complexity (number of operations, time, or whatever you find practical) for increasingly large problem sizes.

For example, say you guess an algorithm is quadratic. Measure (say) the time, and compute the ratio of that time to your guessed function (n^2):

for (int n = 5; n <= 10000; n++) {   // n: problem size
  long start = System.nanoTime();
  executeAlgorithm(n);               // your algorithm, run on a problem of size n
  long end = System.nanoTime();
  long totalTime = end - start;

  double ratio = (double) totalTime / (n * n);
}

As n moves towards infinity, this ratio...

  • Converges to zero? Then your guess is too low. Repeat with something bigger (e.g. n^3).
  • Diverges to infinity? Then your guess is too high. Repeat with something smaller (e.g. n log n).
  • Converges to a positive constant? Bingo! Your guess is on the money (it at least approximates the theoretical complexity for n values as large as the ones you tried).

Basically this uses the definition of big-O notation: f(x) = O(g(x)) <=> f(x) <= c * g(x) for all sufficiently large x. Here f(x) is the actual cost of your algorithm, g(x) is the guess you made, and c is a constant. So you are essentially trying to find the limit of f(x)/g(x) experimentally; if your guess matches the real complexity, this ratio will estimate the constant c.
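
As a self-contained sketch of the technique (RatioCheck, the choice of Arrays.sort as the algorithm under test, and the guess n log n are all just for illustration):

import java.util.Arrays;
import java.util.Random;

public class RatioCheck {
  public static void main(String[] args) {
    Random rnd = new Random(1);
    for (int n = 1_000; n <= 1_024_000; n *= 2) {
      int[] data = rnd.ints(n).toArray();           // fresh random input of size n
      long start = System.nanoTime();
      Arrays.sort(data);                            // the algorithm under test
      long elapsed = System.nanoTime() - start;
      double guess = n * Math.log(n);               // guessed complexity: n log n
      System.out.printf("n=%8d  time/guess = %.4f%n", n, elapsed / guess);
    }
  }
}

If the printed ratio settles around a constant, the n log n guess is (approximately) right; if it keeps shrinking or keeps growing, adjust the guess as described above.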

Dimitris Andreou
+1  A: 

As others have mentioned, the theoretical time complexity is a function of the number of CPU operations your algorithm performs. In general, processor time should be a good approximation of that, modulo a constant. But the real run time may vary for a number of reasons, such as:

  1. Processor pipeline flushes
  2. Cache misses
  3. Garbage collection
  4. Other processes on the machine

Unless your code is systematically causing some of these things to happen, a large enough number of statistical samples should give you a fairly good idea of your algorithm's time complexity based on the observed runtime.
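
For example, a minimal sketch (Arrays.sort stands in for your own algorithm) that averages several samples per input size so that one-off effects wash out:

static double averageRuntimeNanos(int n, int samples) {
  long total = 0;
  for (int s = 0; s < samples; s++) {
    int[] data = new java.util.Random(s).ints(n).toArray();  // fresh input per sample
    long start = System.nanoTime();
    java.util.Arrays.sort(data);                             // stand-in for the algorithm being measured
    total += System.nanoTime() - start;
  }
  return (double) total / samples;                           // average over all samples
}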

Amit Prakash