Hey there, could someone please help me determine the complexity? An example given in my class was

bubble sort

#include <stdio.h>

int main() {
   int a[10] = {10,9,8,7,6,5,4,3,2,1};
   int i, j, temp;

   /* bubble sort: repeatedly swap adjacent elements that are out of order */
   for (j = 0; j < 10; j++) {
      for (i = 0; i < 9; i++) {
         if (a[i] > a[i+1]) {
            temp = a[i];
            a[i] = a[i+1];
            a[i+1] = temp;
         }
      }
   }
   for (i = 0; i < 10; i++) {
      printf("%d ", a[i]);
   }
   return 0;
}

which had a complexity of O(n^2) because it had two nested loops, each O(n), hence O(n) x O(n).


and they said quicksort has a complexity of O(n log(n)). Why is this?

Is it because as it goes around the loop it divides the number in half?

-thanks

+2  A: 

See the analysis on Wikipedia.

Sinan Ünür
A: 

Quicksort is recursive. Write out the pseudocode and you can easily derive a recurrence for the running time of each call, then use the Master Theorem to arrive at the final answer.
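
As a sketch of that derivation (assuming, for simplicity, that the pivot always splits the input into two equal halves):

   % two half-size subproblems plus a linear-time partition pass
   T(n) = 2\,T(n/2) + \Theta(n)
   % Master Theorem, case 2: a = 2, b = 2, f(n) = \Theta(n) = \Theta(n^{\log_b a}),
   % so
   T(n) = \Theta(n \log n)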

Thom Smith
Someone just starting out with complexity isn't likely to understand what you mean by the Master Theorem.
GMan
Though, they are now armed with keywords for which they can google.
Brian Gideon
Au contraire; the Master Theorem is Algorithms 101 material. Moreover, the first Google hit is a clearly written Wikipedia article.
Thom Smith
+9  A: 
John Kugelman
Merge-sort *can* have some rather bad space complexities though -- great for linked lists, but check out combsort on arrays :) Also, see timsort from Python.
pst
If you randomize the pivots, the expected number of comparisons done in Quicksort is _at most_ 2n*ln(n). See http://www.cs.cmu.edu/~odonnell/prob/lecture7.pdf.
Andrew Keeton
+4  A: 

Big-O notation is simply a relationship between the input size (the number of elements in your case) and the complexity (time complexity in your case; it can also be space complexity).

You're correct about the bubble sort. Because it loops n times inside another loop that runs n times, the time complexity is O(n^2).

Quicksort is slightly different. It does a number of passes which depends on n but, in each pass, it manages to put all the values lower than the pivot on the "left" and all values higher than the pivot on the "right". Both halves may still be unsorted, but you know that all of the left elements are less than all of the right elements (let's call this the pivot rule).

This basically chops the workload in half at each level, which leads to an average of O(log n) levels. Similar to binary search or balanced trees, any algorithm that divides the workload by a constant factor on each iteration is O(log n).

Combining the two gives you O(n log n).

The Wikipedia page on quicksort actually shows a nice little graphic at the top right that shows quicksort in action. Since a picture is worth a thousand words (and an animation is worth a thousand pictures), you should watch it for a while to understand.

You'll see that it first divides the workspace in two, then swaps elements between the two halves until the pivot rule is met.

Because the workload is divided into two totally separate, independent areas, quicksort is ripe for parallel processing with no resource contention. Provided you have enough processors, as soon as you've partitioned the data into two areas, you can give each area to a separate processor for further partitioning. This is not possible with bubble sort, since that sort doesn't give you two independent areas.
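
To make that concrete, here is a minimal quicksort sketch in C. This is just one of many possible implementations (it uses the last element as the pivot and the simple Lomuto partition scheme, not necessarily what your class showed); the two recursive calls at the end are the independent areas mentioned above:

   #include <stdio.h>

   /* Partition a[lo..hi] around the last element (the pivot).
      Afterwards a[p] is in its final place: everything left of p
      is <= the pivot, everything right of it is >= the pivot. */
   int partition(int a[], int lo, int hi) {
      int pivot = a[hi];
      int i = lo, j, temp;
      for (j = lo; j < hi; j++) {
         if (a[j] < pivot) {
            temp = a[i]; a[i] = a[j]; a[j] = temp;
            i++;
         }
      }
      temp = a[i]; a[i] = a[hi]; a[hi] = temp;
      return i;
   }

   void quicksort(int a[], int lo, int hi) {
      if (lo < hi) {
         int p = partition(a, lo, hi);  /* one O(n) pass             */
         quicksort(a, lo, p - 1);       /* the two halves are        */
         quicksort(a, p + 1, hi);       /* independent of each other */
      }
   }

   int main() {
      int a[10] = {10,9,8,7,6,5,4,3,2,1};
      int i;
      quicksort(a, 0, 9);
      for (i = 0; i < 10; i++) {
         printf("%d ", a[i]);
      }
      return 0;
   }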

paxdiablo
+2  A: 

Actually quicksort is O(n log(n)) in the average case. In the worst case you pick the largest or smallest element as the pivot every time and do n + (n-1) + ... + 1 = O(n^2) comparisons.

In the best case (and the average case works out to the same big-O) you do n comparisons for the first partition. This makes two calls on problems of size n/2, and those calls take n/2 comparisons each to partition. This continues, so you get n + 2*(n/2) + 4*(n/4) + ... . There are log(n) terms in total and each one is n, so the whole thing is O(n*log(n)).

As Thom said, you can get the same result by applying the Master Theorem, but it's probably worth your time to do some examples by hand.
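
Writing that sum out explicitly (again a sketch, assuming perfectly even splits with n a power of two):

   % each level of the recursion does about n comparisons in total,
   % and there are \log_2 n levels
   \underbrace{n + 2 \cdot \tfrac{n}{2} + 4 \cdot \tfrac{n}{4} + \cdots + \tfrac{n}{2} \cdot 2}_{\log_2 n \text{ terms, each equal to } n} = n \log_2 n = O(n \log n)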

ishanrc
A: 

What would the complexity of this be?

   int a = 0;
   int n = 128;
   while (n > 2) {
      a += n;
      n = n / 2;
   }
newcomer
log(n) or log2(n). With n = 128, the loop body runs for n = 128, 64, 32, 16, 8 and 4, i.e. six times, which is about log2(128).
aib
log(n). In O() notation you discard additive and multiplicative constants, and a change of logarithm base is just a multiplicative constant (log2(n) = log(n)/log(2)), so O(log(n)), O(log2(n)) and O(log10(n)) are all the same.
ptor
+2  A: 

The previous answers describe quicksort and its running time pretty well, but I want to comment on the worst-case running time.

It is true that quicksort in general is O(n log n) in the average case (given a random permutation of inputs) and O(n^2) in the worst case. However, quicksort in general doesn't specify which element to partition around at each step, and the O(n^2) case arises when you always pick a fixed position (usually the first or the last element) as the pivot without regard to its relation to the other elements in the list.

You can implement quicksort to run in worst-case O(n log n) by choosing the pivot wisely. One method is to find the median in O(n) time (see http://en.wikipedia.org/wiki/Selection%5Falgorithm#Linear%5Fgeneral%5Fselection%5Falgorithm%5F-%5F.22Median%5Fof%5FMedians%5Falgorithm.22). Then if you always partition around the median, each element is examined at most O(log n) times, so the total running time is O(n log n).
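
Here is a sketch of that idea in C. Note that select_median is a hypothetical helper standing in for the linear-time median-of-medians selection from the link above; it is only declared here, not implemented:

   /* assumed helper: returns the index of the true median of a[lo..hi]
      in O(n) time, e.g. via median of medians (hypothetical, not
      implemented here) */
   int select_median(int a[], int lo, int hi);

   /* the same O(n) partition-around-the-last-element pass as in the
      earlier sketch */
   int partition(int a[], int lo, int hi);

   void quicksort_median(int a[], int lo, int hi) {
      if (lo < hi) {
         int m = select_median(a, lo, hi);
         /* move the median into the pivot position, then partition */
         int temp = a[m]; a[m] = a[hi]; a[hi] = temp;
         int p = partition(a, lo, hi);
         quicksort_median(a, lo, p - 1);   /* each half has at most n/2      */
         quicksort_median(a, p + 1, hi);   /* elements, so depth is O(log n) */
      }
   }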

KirarinSnow
A: 

Contrary to the opinion of everyone else here, the complexity of your program is O(1). You haven't defined what n is. I know my answer seems a bit daft, but in practice it is often more important to find the bounds of your problem than the complexity of the algorithm. If your dataset will never be bigger than a given size, you may be better off using a simpler method with good enough performance/behaviour.

tristopia
For any finite computer, all programs are O(1), true, but that's not a very useful insight. For large datasets, the asymptotic performance typically becomes very important.
David Thornley
No quarrel there; I only wanted to point out that it is important to look at the dataset size before engaging in algorithm optimization. You know, the evil of premature optimization and all that.
tristopia