I've been noticing answers on stack overflow that use terms like these, but I don't know what they mean. What are they called and is there a good resource that can explain them in simple terms?

+4  A: 

Read about Computational Complexity first, then try some books about algorithms like Introduction to Algorithms.

From the Wikipedia page:

Big O notation characterizes functions according to their growth rates

If you don't want to drill into the details, you can very often approximate an algorithm's complexity by analyzing its code:

void simpleFunction(arg); // O(1) - the number of instructions is constant and doesn't depend on the input size

for (int i=0;i<n;i++) {simpleFunction(element[i]);} // O(n)

for (int i=0;i<n;i++) { // this one runs O(n^2)
    for (int j=0;j<n;j++) {
        simpleFunction(element[i]);
    }
}

for (int i=1;i<n;i*=2) {  // O(lg n) - i doubles each pass, so the loop runs about log2(n) times
    simpleFunction(element[i]);
}

Sometimes it is not so simple to estimate an algorithm's Big O complexity; in such cases amortized analysis is used. The code above should serve only as a quick starting point.
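
For example (a minimal sketch of my own, not from the answer, using Java-style syntax like the snippets above): appending to a dynamic array that doubles its capacity costs O(n) in the worst case for a single append, but a sequence of n appends costs O(n) in total, so the amortized cost per append is O(1).

// Sketch of amortized analysis: a dynamic array that doubles its capacity when full.
// A single add() may cost O(n) (when it has to resize), but n adds cost O(n) overall,
// so the amortized cost per add() is O(1).
class DynamicArray {
    private int[] data = new int[1];
    private int size = 0;

    void add(int value) {
        if (size == data.length) {
            int[] bigger = new int[data.length * 2]; // resize: copies all elements, O(size)
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = value; // the common case: O(1)
    }
}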

jethro
+14  A: 

That notation is called Big O notation, and is used as a shorthand to express algorithmic complexity (basically, how long a given algorithm will take to run as the input size (n) grows).

Generally speaking, you'll run into the following major types of algorithms:

  1. O(1) - Constant - The length of time this algorithm takes to complete does not depend on the number of items the algorithm has to process.
  2. O(log n) - Logarithmic - The length of time this algorithm takes to complete depends on the number of items the algorithm has to process, but as the input size becomes larger, less additional time is needed for each new input (see the binary-search sketch after this list).
  3. O(n) - Linear - The length of time this algorithm takes to complete is directly proportional to the number of items the algorithm has to process. As the input size grows, the time it takes grows by equal amounts.
  4. O(n^2) - Polynomial - As the input size grows, the time it takes to process the input grows by a larger and larger amount, meaning that large input sizes become prohibitively difficult to solve.
  5. O(2^n) - Exponential - The most complicated types of problems. The time to process increases to an extreme degree as the input size grows.
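
For illustration (a sketch of my own, not from the answer, assuming a sorted int array): binary search is a classic O(log n) algorithm, because each step halves the remaining search range.

// Sketch of an O(log n) algorithm: binary search over a sorted array.
// Each iteration halves the range [lo, hi], so at most about log2(n) iterations run.
static int binarySearch(int[] sorted, int target) {
    int lo = 0;
    int hi = sorted.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; // avoids overflow of (lo + hi)
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1; // not found
}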

Generally you can get a rough gauge of the complexity of an algorithm by looking at how its code is structured. For example, consider the following method:

static int sum(int[] x) {
    int sum = 0;
    for (int i = 0; i < x.length; i++) {
        sum += x[i];
    } 
    return sum;
}

There are a few things that have to be done here:

  • Initialize a variable called sum
  • Initialize a variable called i
  • For each iteration of i: Add x[i] to sum, add 1 to i, check if i is less than x.length
  • Return sum

There are a few operations that run in constant time here (the first two and the last), since the size of x doesn't affect how long they take to run. There are also some operations that run in linear time, since they are run once for each entry in x. With Big O notation, the algorithm is reduced to its most complex (fastest-growing) term, so this sum algorithm runs in O(n).
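
To illustrate that last point (a sketch of my own, not part of the original answer): if a method does both a linear amount of work and a quadratic amount of work, Big O keeps only the fastest-growing term, so the method as a whole is O(n^2).

// Sketch: an O(n) loop followed by an O(n^2) nested loop.
// Total work is roughly n + n^2; only the fastest-growing term counts, so this is O(n^2).
static int pairSumCount(int[] x) {
    int total = 0;
    for (int i = 0; i < x.length; i++) { // O(n)
        total += x[i];
    }
    int pairs = 0;
    for (int i = 0; i < x.length; i++) { // O(n^2): n iterations of an n-iteration loop
        for (int j = 0; j < x.length; j++) {
            if (x[i] + x[j] == total) {
                pairs++;
            }
        }
    }
    return pairs;
}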

Ryan Brunner
Ryan, for number 3 you mean O(n).
Ken Redler
Thanks for catching that :)
Ryan Brunner
I think O(n*ln(n)) is also noteworthy, since this is the complexity of many sorting algorithms.
ammoQ
Worth noting is that the actual formula for duration can be complicated by multipliers and such. Big O is just an approximation. Two O(n) functions have the same complexity, but if one takes O(n) time and the other takes O(1000n) time then the latter is a slower function but both scale to the same degree. An O(n^2) function may be better (for small inputs) than the O(1000n) function in such cases.
CodexArcanum
A: 

This is called the Big O notation and is used to quantify the complexity of algorithms.

O(1) means the algorithm takes constant time no matter how much data there is to process.

O(n) means the algorithm speed grows in a linear way with the amount of data.

and so on...

So the lower the power of n in the O notation, the better your algorithm is at solving the problem. The best case is O(1) (the power of n is zero). But many problems have an inherent complexity, so in nearly all cases you won't find such an ideal algorithm.

jdehaan
A: 

The answers are good so far. The main term to web search on is "Big O notation".

The basic idea behind the math of "someformula is O(someterm)" is that, as your variable goes to infinity, "someterm" is the part of the formula that dominates.

For example, assume you have 0.05*x^3 + 300*x^2 + 200000000*x + 10. For very small values of x (x==1 or x==2), the 200000000*x term will be by far the biggest part. At that point a plot of the formula will look linear. As you go along, at some point the 300*x^2 part will become bigger. However, if you keep making x bigger, as big as you care to, the 0.05*x^3 part will be the largest, and will eventually totally outstrip the other parts of the formula. That is where it becomes clear from a graph that you are looking at a cubic function. So we would say that the formula is O(x^3).
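
A quick way to see this (a sketch of my own, not part of the answer) is to print each term for a few values of x and watch which one takes over:

// Sketch: evaluate each term of 0.05*x^3 + 300*x^2 + 200000000*x + 10 for growing x.
public class DominantTerm {
    public static void main(String[] args) {
        double[] xs = {1, 100, 10_000, 1_000_000};
        for (double x : xs) {
            double cubic = 0.05 * x * x * x;
            double quadratic = 300 * x * x;
            double linear = 200_000_000 * x;
            System.out.printf("x=%.0f: 0.05x^3=%.3e, 300x^2=%.3e, 2e8*x=%.3e%n",
                    x, cubic, quadratic, linear);
        }
        // At the largest x shown, the cubic term is by far the biggest,
        // which is why the whole formula is O(x^3).
    }
}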

T.E.D.