Most people with a degree in CS will certainly know what Big O stands for. It helps us measure how (in)efficient an algorithm really is, and if you know which category the problem you are trying to solve falls into, you can figure out whether it is still possible to squeeze out that little extra performance.*

But I'm curious, how do you calculate or approximate the complexity of your algorithms?

*: but as they say, don't overdo it; premature optimization is the root of all evil, and optimization without a justified cause deserves that name as well.

+2  A: 

Familiarity with the algorithms/data structures I use and/or quick-glance analysis of iteration nesting. The difficulty is when you call a library function, possibly multiple times - you can often be unsure whether you are calling the function unnecessarily at times or what implementation it is using. Maybe library functions should have a complexity/efficiency measure, whether that be Big O or some other metric, that is available in documentation or even IntelliSense.

Graphain
+2  A: 

Break down the algorithm into pieces you know the big O notation for, and combine through big O operators. That's the only way I know of.

For more information, check the Wikipedia page on the subject.
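
For a rough illustration (a made-up C function, not from any real library): sequential pieces add, nested pieces multiply, and the largest term wins.

// Hypothetical example of combining pieces whose big O we already know.
long combine_example(const int *a, int n)
{
    long sum = 0;

    // piece 1: a single pass over the data -> O(n)
    for (int i = 0; i < n; i++)
        sum += a[i];

    // piece 2: a nested pass -> O(n) * O(n) = O(n^2)
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (a[i] == a[j])
                sum++;

    // whole function: O(n) + O(n^2) = O(n^2), since the largest term dominates
    return sum;
}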

Lasse V. Karlsen
+81  A: 

Big O gives the upper bound for time complexity of an algorithm. It is usually used in conjunction with processing data sets (lists) but can be used elsewhere.

A few examples of how it's used in C code.

Say we have an array of n elements

int array[n];

If we wanted to access the first element of the array this would be O(1) since it doesn't matter how big the array is, it always takes the same constant time to get the first item.

x = array[0];

If we wanted to find a number in the list:

for (int i = 0; i < n; i++) {
    if (array[i] == numToFind) { return i; }
}

This would be O(n) since at most we would have to look through the entire list to find our number. The Big-O is still O(n) even though we might find our number on the first try and run through the loop only once, because Big-O describes the upper bound for an algorithm (Omega is for the lower bound and Theta is for the tight bound).

When we get to nested loops:

for (int i = 0; i < n; i++) {
    for (int j = i; j < n; j++) {
        array[j] += 2;
    }
}

This is O(n^2) since for each pass of the outer loop ( O(n) ) we have to go through (up to) the entire list again, so the n's multiply, leaving us with n squared.

This is barely scratching the surface, but when you get to analyzing more complex algorithms, complex math involving proofs comes into play. Hope this familiarizes you with the basics, at least.

DShook
Great explanation! So if someone says his algorithm has an O(n^2) complexity, does it mean he will be using nested loops?
Appu
Not really; anything that leads to n squared steps will be considered n^2.
Vadi
Thanks DShook, great explanation I must admit. I am also looking for an answer to the same question.
alice7
+6  A: 

Seeing the answers here, I think we can conclude that most of us do indeed approximate the order of the algorithm by looking at it and using common sense, instead of calculating it with, for example, the master method as we were taught at university. With that said, I must add that even the professor encouraged us (later on) to actually think about it instead of just calculating it.

Also I would like to add how it is done for recursive functions:

suppose we have a function like:
(scheme code)

(define (fac n)
  (if (= n 0)
    1
    (* n (fac (- n 1)))))

which recursively calculates the factorial of the given number.

The first step is to try and determine the performance characteristic for the body of the function only. In this case, nothing special is done in the body, just a multiplication (or the return of the value 1).

so the performance for the body is: O(1) (constant)

Next, try to determine this for the number of recursive calls. In this case we have n-1 recursive calls,

so the performance for the recursive calls is: O(n-1) (the order is n, as we throw away the insignificant parts)

Then put those two together and you have the performance for the whole recursive function:
1 * (n-1) = O(n)
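
Written as a recurrence, the same answer falls out of unrolling it (a rough sketch of the same reasoning):

T(n) = T(n-1) + O(1)
     = T(n-2) + 2*O(1)
     = ...
     = T(0) + n*O(1)
     = O(n)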


Peter, to answer the issues you raised: the method I describe here actually handles this quite well. But keep in mind that this is still an approximation and not a fully mathematically correct answer. The method described here is also one of the methods we were taught at university and, if I remember correctly, was used for far more advanced algorithms than the factorial I used in this example.
Of course it all depends on how well you can estimate the running time of the body of the function and the number of recursive calls, but that is just as true for the other methods.

Sven
+2  A: 

Big O notation is useful because it's easy to work with and hides unnecessary complications and details (for some definition of unnecessary). One nice way of working out the complexity of divide and conquer algorithms is the tree method. Let's say you have a version of quicksort with the median procedure, so you split the array into perfectly balanced subarrays every time.

Now build a tree corresponding to all the arrays you work with. At the root you have the original array; the root has two children, which are the subarrays. Repeat this until you have single-element arrays at the bottom.

Since we can find the median in O(n) time and split the array in two parts in O(n) time, the work done at each node is O(k), where k is the size of the array. Each level of the tree contains (at most) the entire array, so the work per level is O(n) (the sizes of the subarrays add up to n, and since we do O(k) work per node we can add this up). There are only log(n) levels in the tree since each time we halve the input.

Therefore we can upper bound the amount of work by O(n*log(n)).
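
The same argument can be written as a recurrence (a sketch, with c some constant for the per-element work):

T(n) = 2*T(n/2) + c*n
     = 4*T(n/4) + c*n + c*n
     = ...
     = n*T(1) + c*n*log(n)
     = O(n*log(n))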

However, Big O hides some details which we sometimes can't ignore. Consider computing the Fibonacci sequence with

a = 0;
b = 1;
for (i = 0; i < n; i++) {
    tmp = b;
    b = a + b;
    a = tmp;
}

and let's just assume that a and b are BigIntegers in Java or something that can handle arbitrarily large numbers. Most people would say this is an O(n) algorithm without flinching. The reasoning is that you have n iterations in the for loop and O(1) work inside the loop.

But Fibonacci numbers are large; the n-th Fibonacci number is exponential in n, so just storing it will take on the order of n bytes. Performing addition with big integers will take O(n) work. So the total amount of work done in this procedure is

1 + 2 + 3 + ... + n = n(n+1)/2 = O(n^2)

So this algorithm runs in quadratic time!

As said earlier, adding two n-digit numbers runs in O(n) time...
Learner
You shouldn't care about how the numbers are stored; it doesn't change the fact that the algorithm grows at an upper bound of O(n).
mikek3332002
+2  A: 

Basically, the thing that crops up 90% of the time is just analyzing loops. Do you have single, double, triple nested loops? Then you have O(n), O(n^2), O(n^3) running time.

Very rarely (unless you are writing a platform with an extensive base library, like for instance the .NET BCL or C++'s STL) will you encounter anything that is more difficult than just looking at your loops (for statements, while, goto, etc.).

Adam
A: 

Sven, I'm not sure that your way of judging the complexity of a recursive function is going to work for more complex ones, such as doing a top-to-bottom search/summation/something in a binary tree. Sure, you could reason about a simple example and come up with the answer. But I figure you'd have to actually do some math for recursive ones?

Peteter
+11  A: 

Small reminder: the 'big O' notation is used to denote asymptotic complexity (that is, when the size of the problem grows to infinity), and it hides a constant.

This means that between an algorithm in O(n) and one in O(n^2), the fastest is not always the first one (though there always exists a value of n such that for problems of size >n, the first algorithm is the fastest).
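
For a concrete (made-up) pair of constants: if the O(n) algorithm actually takes 100*n steps and the O(n^2) one takes n^2 steps, the 'slower' O(n^2) algorithm wins whenever n^2 < 100*n, i.e. for every n < 100.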

Note that the hidden constant very much depends on the implementation!

Also, in some cases, the runtime is not a deterministic function of the 'size' n of the input. Take sorting using quicksort for example: the time needed to sort an array of n elements is not a constant but depends on the starting configuration of the array; there are different time complexities: worst case (usually the simplest to figure out, though not always very meaningful), average case (usually much harder to figure out :-( ).

A good introduction is 'An Introduction to the Analysis of Algorithms' by R. Sedgewick and P. Flajolet.

As you say, 'premature optimization is the root of all evil'... and (if possible) profiling really should always be used when optimizing code. It can even help you determine the complexity of your algorithms.

OysterD
+2  A: 

In addition to using the master method (or one of its specializations), I test my algorithms experimentally. This can't prove that any particular complexity class is achieved, but it can provide reassurance that the mathematical analysis is appropriate. To help with this reassurance, I use code coverage tools in conjunction with my experiments, to ensure that I'm exercising all the cases.

As a very simple example say you wanted to do a sanity check on the speed of the .NET framework's list sort. You could write something like the following, then analyze the results in Excel to make sure they did not exceed an n*log(n) curve.

In this example I measure the number of comparisons, but it's also prudent to examine the actual time required for each sample size. However then you must be even more careful that you are just measuring the algorithm and not including artifacts from your test infrastructure.

int nCmp = 0;
System.Random rnd = new System.Random();

// measure the number of comparisons required to sort a list of n integers
void DoTest(int n)
{
   List<int> lst = new List<int>(n);
   for( int i=0; i<n; i++ )
      lst.Add( rnd.Next(0,1000) );

   // as we sort, keep track of the number of comparisons performed!
   nCmp = 0;
   lst.Sort( delegate( int a, int b ) { nCmp++; return (a<b) ? -1 : ((a>b) ? 1 : 0); } );

   System.Console.WriteLine( "{0},{1}", n, nCmp );
}


// Perform measurement for a variety of sample sizes.
// It would be prudent to check multiple random samples of each size, but this is OK for a quick sanity check
for( int n = 0; n<1000; n++ )
   DoTest(n);
Eric
+8  A: 

While knowing how to figure out the Big O time for your particular problem is useful, knowing some general cases can go a long way in helping you make decisions in your algorithm.

Here are some of the most common cases, lifted from http://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions:

O(1) - Determining if a number is even or odd; using a constant-size lookup table or hash table

O(log n) - Finding an item in a sorted array with a binary search (see the sketch after this list)

O(n) - Finding an item in an unsorted list; adding two n-digit numbers

O(n^2) - Multiplying two n-digit numbers by a simple algorithm; adding two n×n matrices; bubble sort or insertion sort

O(n^3) - Multiplying two n×n matrices by simple algorithm

O(c^n) - Finding the (exact) solution to the traveling salesman problem using dynamic programming; determining if two logical statements are equivalent using brute force

O(n!) - Solving the traveling salesman problem via brute-force search

O(n^n) - Often used instead of O(n!) to derive simpler formulas for asymptotic complexity
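
As a minimal C sketch of the O(log n) entry above (the array is assumed to be sorted in ascending order):

// Binary search: each comparison halves the remaining range,
// so at most about log2(n) iterations are needed.
int binary_search(const int *a, int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   // written this way to avoid overflow of (lo + hi)
        if (a[mid] == key)
            return mid;                 // found
        else if (a[mid] < key)
            lo = mid + 1;               // discard the lower half
        else
            hi = mid - 1;               // discard the upper half
    }
    return -1;                          // not present
}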

Giovanni Galbo
+1  A: 

Less useful generally, I think, but for the sake of completeness there is also a Big-Omega, which defines a lower bound on an algorithm's complexity, and a Big-Theta, which defines both an upper and a lower bound.

Martin
+1  A: 

Don't forget to also allow for space complexity, which can also be a cause for concern if one has limited memory resources. So, for example, you may hear someone wanting a constant-space algorithm, which is basically a way of saying that the amount of space taken by the algorithm doesn't grow with the size of the input.
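
A minimal C sketch of the constant-space idea (the function name is just illustrative): reversing an array in place uses O(1) extra space, whereas reversing into a second array would need O(n) extra space.

void reverse_in_place(int *a, int n)
{
    for (int i = 0, j = n - 1; i < j; i++, j--) {
        int tmp = a[i];   // only one temporary, no matter how large n gets
        a[i] = a[j];
        a[j] = tmp;
    }
}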

Sometimes the complexity can come from how many times something is called, how often a loop is executed, how often memory is allocated, and so on; that is another part of answering this question.

Lastly, big O can be used for worst-case, best-case, and amortized cases, where generally it is the worst case that is used to describe how bad an algorithm may be.

JB King
A: 

If you want to estimate the order of your code empirically rather than by analyzing the code, you could stick in a series of increasing values of n and time your code. Plot your timings on a log-log scale. If the code is O(x^n), the values should fall on a line of slope n.

This has several advantages over just studying the code. For one thing, you can see whether you're in the range where the run time approaches its asymptotic order. Also, you may find that some code that you thought was order O(x) is really order O(x^2), for example, because of time spent in library calls.
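
As a small sketch (the two sample measurements below are made-up numbers), that slope can be estimated from any two points on the log-log plot:

#include <math.h>
#include <stdio.h>

// Estimate the exponent k in t ~ c * x^k from two (size, time) samples;
// on a log-log plot this is just the slope of the line through them.
double estimated_slope(double x1, double t1, double x2, double t2)
{
    return log(t2 / t1) / log(x2 / x1);
}

int main(void)
{
    // hypothetical timings: 10,000 items took 0.02 s, 100,000 items took 2.1 s
    printf("slope ~ %.2f\n", estimated_slope(1e4, 0.02, 1e5, 2.1));  // prints roughly 2, i.e. O(x^2)
    return 0;
}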

John D. Cook
+2  A: 

I think about it in terms of information. Any problem consists of learning a certain number of bits.

Your basic tool is the concept of decision points and their entropy. The entropy of a decision point is the average information it will give you. For example, if a program contains a decision point with two branches, its entropy is the sum of the probability of each branch times the log (base 2) of the inverse probability of that branch. That's how much you learn by executing that decision.

For example, an IF statement having two branches, both equally likely, has an entropy of 1/2 * log(2/1) + 1/2 * log(2/1) = 1/2 * 1 + 1/2 * 1 = 1. So its entropy is 1 bit.
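
The same arithmetic as a small C sketch (the branch probabilities passed in are hypothetical):

#include <math.h>
#include <stdio.h>

// Entropy (in bits) of a decision point with the given branch probabilities:
// H = sum over branches of p * log2(1/p).
double entropy(const double *p, int branches)
{
    double h = 0.0;
    for (int i = 0; i < branches; i++)
        if (p[i] > 0.0)
            h += p[i] * log2(1.0 / p[i]);
    return h;
}

int main(void)
{
    double even[]   = { 0.5, 0.5 };                   // a fair two-way IF: 1 bit
    double skewed[] = { 1.0 / 1024, 1023.0 / 1024 };  // one linear-search step: about 0.01 bit
    printf("%.3f %.3f\n", entropy(even, 2), entropy(skewed, 2));
    return 0;
}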

Suppose you are searching a table of N items, like N=1024. That is a 10-bit problem because log(1024) = 10 bits. So if you can search it with IF statements that have equally likely outcomes, it should take 10 decisions.

That's what you get with binary search.

Suppose you are doing linear search. You look at the first element and ask if it's the one you want. The probabilities are 1/1024 that it is, and 1023/1024 that it isn't. The entropy of that decision is 1/1024*log(1024/1) + 1023/1024 * log(1024/1023) = 1/1024 * 10 + 1023/1024 * about 0 = about .01 bit. You've learned very little! The second decision isn't much better. That is why linear search is so slow. In fact it's exponential in the number of bits you need to learn.

Suppose you are doing indexing. Suppose the table is pre-sorted into a lot of bins, and you use some or all of the bits in the key to index directly to the table entry. If there are 1024 bins, the entropy is 1/1024 * log(1024) + 1/1024 * log(1024) + ... for all 1024 possible outcomes. This is 1/1024 * 10 times 1024 outcomes, or 10 bits of entropy for that one indexing operation. That is why indexing search is fast.

Now think about sorting. You have N items, and you have a list. For each item, you have to search for where the item goes in the list, and then add it to the list. So sorting takes roughly N times the number of steps of the underlying search.

So sorts based on binary decisions having roughly equally likely outcomes all take about O(N log N) steps. An O(N) sort algorithm is possible if it is based on indexing search.
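
Counting sort is one concrete example of such an index-based O(N) sort; a minimal sketch for keys known to lie in [0, K):

#include <string.h>

#define K 1024   // assumed upper bound on key values

// Each key indexes directly into its bin, so no comparisons are needed:
// O(n + K) overall.
void counting_sort(int *a, int n)
{
    int count[K];
    memset(count, 0, sizeof count);

    for (int i = 0; i < n; i++)     // O(n): bin each key
        count[a[i]]++;

    int out = 0;
    for (int k = 0; k < K; k++)     // O(K): write keys back in sorted order
        while (count[k]-- > 0)
            a[out++] = k;
}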

I've found that nearly all algorithmic performance issues can be looked at in this way.

Mike Dunlavey
Excellent explanation!!
Vadi
+1  A: 

What often gets overlooked is the expected behavior of your algorithms. It doesn't change the Big-O of your algorithm, but it does relate to the statement "premature optimization..."

Expected behavior of your algorithm is -- very dumbed down -- how fast you can expect your algorithm to work on data you're most likely to see.

For instance, if you're searching for a value in a list, it's O(n), but if you know that most lists you see have your value up front, typical behavior of your algorithm is faster.

To really nail it down, you need to be able to describe the probability distribution of your "input space" (if you need to sort a list, how often is that list already going to be sorted? how often is it totally reversed? how often is it mostly sorted?). It's not always feasible to know that, but sometimes you do.

Baltimark
+2  A: 

As to "how do you calculate" Big O, this is part of Computational complexity theory. For some (many) special cases you may be able to come with some simple heuristics (like multiplying loop counts for nested loops), esp. when all you want is any upper bound estimation, and you do not mind if it is too pessimistic - which I guess is probably what your question is about.

If you really want to answer your question for any algorithm, the best you can do is to apply the theory. Besides simplistic "worst case" analysis, I have found amortized analysis very useful in practice.
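
For reference, one of the standard tools from that theory is the (simplified) master method mentioned in other answers; it covers recurrences of the shape T(n) = a*T(n/b) + O(n^d), with a >= 1, b > 1, d >= 0:

if d > log_b(a)   then T(n) = O(n^d)
if d = log_b(a)   then T(n) = O(n^d * log n)
if d < log_b(a)   then T(n) = O(n^(log_b(a)))

For example, a balanced divide-and-conquer sort has a = 2, b = 2, d = 1, so d = log_2(2) = 1 and T(n) = O(n log n).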

Suma
+2  A: 

We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

That's the whole quote, by the way, so it doesn't mean you should never optimize before being at the optimization stage.

Annerajb