views: 180
answers: 3

Possible Duplicate:
Plain english explanation of Big O

I'd imagine this is probably something taught in classes, but as a self-taught programmer, I've only seen it rarely.

I've gathered it has something to do with time, and that O(1) is the best while stuff like O(n^n) is very bad, but could someone point me to a basic explanation of what it actually represents, and where these numbers come from?

+2  A: 

Big O refers to the worst-case run-time order. It is used to show how well an algorithm scales based on the size of the data set (n = number of items).

Since we are only concerned with the order, constant multipliers are ignored, and any terms which increase less quickly than the dominant term are also removed. Some examples:

A single operation or set of operations is O(1), since it takes some constant time (does not vary based on data set size).

A loop is O(n). Each element in the data set is looped over.

A nested loop is O(n^2). Nesting a third loop gives O(n^3), and so on.
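For instance, roughly in Python (an illustrative sketch; the function names are made up):

    def get_first(items):
        # O(1): one operation, no matter how many items there are
        return items[0]

    def total(items):
        # O(n): the loop body runs once per item
        s = 0
        for x in items:
            s += x
        return s

    def count_pairs_with_sum(items, target):
        # O(n^2): a loop inside a loop runs n * n times
        count = 0
        for a in items:
            for b in items:
                if a + b == target:
                    count += 1
        return count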

Things like binary tree searching are O(log n), which is harder to show, but at every level in the tree the number of possible solutions is halved, so the number of levels is about log2(n) (provided the tree is balanced).
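Binary search over a sorted list shows the same halving idea (a sketch, assuming the input list is already sorted):

    def binary_search(sorted_items, target):
        # O(log n): each pass throws away half of the remaining range
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1  # not found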

Something like finding the subset of a set of numbers whose sum is closest to a given value is O(2^n), since the sum of every subset needs to be calculated and there are 2^n subsets. This is very bad.
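A brute-force sketch of that closest-subset-sum idea (hypothetical; it just enumerates every subset, which is where the 2^n comes from):

    from itertools import combinations

    def closest_subset_sum(numbers, target):
        # O(2^n): every non-empty subset of the input is considered
        best = None
        for r in range(1, len(numbers) + 1):
            for subset in combinations(numbers, r):
                s = sum(subset)
                if best is None or abs(s - target) < abs(best - target):
                    best = s
        return best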

Adam Shiemke
You can also use this notation to describe the spatial (memory) behavior.
Gumbo
-1 It doesn't have to be the worst case. In my last year's algorithms class we showed the Big O for the worst case, the best case, and, if we could figure it out, the average case.
Malfist
Often Big O notation is the average case. We say interpolation search is O(log log n), but its worst case is O(n) if the values are far enough apart. http://en.wikipedia.org/wiki/Interpolation_search
Malfist
Big O is sometimes used to show more common cases since the worst case isn't that interesting. If you see O(n^2) without any caveats, you should assume worst case. https://secure.wikimedia.org/wikipedia/en/wiki/Big_O_notation#Family_of_Bachmann.E2.80.93Landau_notations
Adam Shiemke
+3  A: 

It's a way of expressing time complexity.

O(n) means that for n elements in a list, it takes n computations to sort the list, which isn't bad at all. Each increase in n increases the time complexity linearly.

O(n^n) is bad, because the amount of computation required to perform a sort (or whatever you are doing) will exponentially increase as you increase n.

O(1) is the best, as it means 1 computation to perform a function. Think of hash tables: looking up a value in a hash table has O(1) time complexity.
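Roughly, in Python (the dict lookup is O(1) on average; the names here are made up for illustration):

    def find_linear(items, target):
        # O(n): may have to look at every element
        for i, x in enumerate(items):
            if x == target:
                return i
        return -1

    ages = {"alice": 30, "bob": 25}
    # O(1) on average: the cost of a dict lookup stays roughly
    # the same no matter how many entries the table holds
    print(ages.get("bob"))  # 25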

Tom Gullen
Actually, this is not quite right. It's about expressing the rate at which worst-case costs grow. So O(N) means that if the number of data items being processed doubles, the worst-case time for processing the data will double. Oh, and O(1) doesn't mean "1 computation"; it means that the computation cost is constant, regardless of the number of data points. A hash table with no collisions is a good example of this.
torak
A: 

Big O notation as applied to an algorithm refers to how the run time of the algorithm depends on the amount of input data. For example, a sorting algorithm will take longer to sort a large data set than a small data set. If, for the sorting algorithm example, you graph the run time (vertical axis) vs. the number of values to sort (horizontal axis), for numbers of values from zero to a large number, the nature of the line or curve that results will depend on the sorting algorithm used. Big O notation is a shorthand method for describing the line or curve.

In big O notation, the expression in the brackets is the function that is graphed. If a variable (say n) is included in the expression, this variable refers to the size of the input data set. You say O(1) is the best. This is true because the graph of f(n) = 1 does not vary with n. An O(1) algorithm takes the same amount of time to complete regardless of the size of the input data set. By contrast, the run time of an O(n^2) algorithm increases with the square of the size of the input data set, and something like O(n^n) grows faster still.
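One rough way to see this yourself is to time a function at a few input sizes and watch how the run time grows as n doubles (a sketch using Python's timeit; the two functions are just stand-ins):

    import timeit

    def linear(n):
        # O(n) stand-in
        return sum(range(n))

    def quadratic(n):
        # O(n^2) stand-in
        return sum(i * j for i in range(n) for j in range(n))

    for n in (500, 1000, 2000):
        t_lin = timeit.timeit(lambda: linear(n), number=10)
        t_quad = timeit.timeit(lambda: quadratic(n), number=10)
        print(f"n={n}: linear {t_lin:.6f}s, quadratic {t_quad:.6f}s")
    # doubling n roughly doubles the linear time and
    # roughly quadruples the quadratic time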

That is the basic idea. For a detailed explanation, consult the Wikipedia page titled 'Big O notation'.

Tom