views:

306

answers:

4

How are algorithms analyzed? What makes quicksort have an O(n^2) worst-case performance while merge sort has an O(n log(n)) worst-case performance?

+1  A: 

Both quicksort and merge sort split the array into two parts, sort each part recursively, then combine the results. Quicksort splits by choosing a "pivot" element and partitioning the array into elements smaller than and greater than the pivot. Merge sort splits arbitrarily down the middle and then merges the results in linear time. In both cases a single level of work is O(n), and if the array size halves each time there is a logarithmic number of levels. So we would expect O(n log(n)).

However quicksort has a worst case where the split is always uneven, so the number of levels is proportional to n rather than to the logarithm of n. Merge sort splits exactly into two halves (or as close as possible), so it doesn't have this problem.
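
A minimal Python sketch of the two strategies the answer describes (illustrative only, not the exact code the answer had in mind, and not in-place or optimized):

```python
def quicksort(a):
    # Choose a pivot, partition into smaller/greater, recurse on each part.
    if len(a) <= 1:
        return a
    pivot = a[0]
    smaller = [x for x in a[1:] if x < pivot]
    greater = [x for x in a[1:] if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(greater)

def mergesort(a):
    # Split arbitrarily (down the middle), then merge the sorted halves
    # in linear time.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = mergesort(a[:mid]), mergesort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

Note that quicksort's partition sizes depend on the data, while merge sort's split is always as even as possible.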

Mark Byers
A: 
  • Quicksort has many variants depending on pivot selection
  • Let's assume we always select the first item in the array as the pivot

If the input array is sorted, then quicksort becomes a kind of selection sort! You are not really dividing the array; you are only picking off the first item in each cycle.

On the other hand, merge sort will always divide the input array in the same manner, regardless of its content!

Also note: divide and conquer performs best when the divisions are of (nearly) equal length!
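
A small sketch of the degenerate case (my own illustration, assuming the first-element-pivot variant this answer describes): measure how deep the recursion goes on an already-sorted input.

```python
def qs_depth(a, depth=0):
    # Quicksort with first-element pivot; instead of sorting, return
    # the maximum recursion depth reached.
    if len(a) <= 1:
        return depth
    pivot = a[0]
    smaller = [x for x in a[1:] if x < pivot]
    greater = [x for x in a[1:] if x >= pivot]
    return max(qs_depth(smaller, depth + 1), qs_depth(greater, depth + 1))

# On sorted input, each partition peels off only the pivot, so the
# recursion depth is linear in n -- exactly the selection-sort behavior.
print(qs_depth(list(range(16))))  # → 15, i.e. n - 1 levels, not log2(n)
```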

Betamoo
+4  A: 

That's a topic for an entire semester. Ultimately we are talking about the upper bound on the number of operations that must be completed before the algorithm finishes, as a function of the size of the input. We do not include the coefficients (e.g. 10N vs 4N^2: for N large enough, 10N wins no matter the constants) because beyond that point they don't matter anymore.

Proving what the big-O of an algorithm is can be quite difficult. It requires a formal proof, and there are many techniques. Often a good ad hoc way is just to count how many passes over the data the algorithm makes. For instance, if your algorithm has nested for loops, then for each of N items you must operate N times. That is generally O(N^2).
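
The counting approach can be made literal (a toy illustration of the nested-loop case, not code from the answer):

```python
n = 100
count = 0
for i in range(n):        # outer pass: once per item
    for j in range(n):    # inner pass: N operations per outer item
        count += 1        # the "operation" we are counting

print(count)  # → 10000, i.e. n**2 operations: O(N^2)
```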

As to merge sort, you split the data in half over and over. That gives log2(n) levels. At each level you make a pass over the data, which gives N log(n).

Quicksort is a bit trickier because in the average case it is also N log(n). You have to imagine what happens if your partition splits the data such that every time you get only one element on one side of the partition. Then you need to split the data N times instead of log(n) times, which makes it N^2. The advantage of quicksort is that it can be done in place, and that we usually get close to N log(n) performance.
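
One way to see the gap concretely (my own sketch, again assuming a first-element pivot): count comparisons on sorted versus shuffled input.

```python
import random

def qs_comparisons(a):
    # Count element comparisons made by first-element-pivot quicksort.
    # The partition compares the pivot against the other len(a) - 1 items.
    if len(a) <= 1:
        return 0
    pivot = a[0]
    smaller = [x for x in a[1:] if x < pivot]
    greater = [x for x in a[1:] if x >= pivot]
    return len(a) - 1 + qs_comparisons(smaller) + qs_comparisons(greater)

n = 512
print(qs_comparisons(list(range(n))))  # sorted: n*(n-1)//2 = 130816 -- quadratic

random.seed(0)
shuffled = list(range(n))
random.shuffle(shuffled)
print(qs_comparisons(shuffled))  # shuffled: close to n*log2(n), far smaller
```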

frankc
A: 
  1. This is introductory analysis of algorithms course material.

  2. An operation is defined (e.g., multiplication), and the analysis is performed in terms of either space or time.

  3. This operation is counted in terms of space or time. Typically, analyses are performed with time as the dependent variable and input size as the independent variable.

Example pseudocode:

foreach $elem in @list
   op();
endfor

There will be n operations performed, where n is the size of @list. Count it yourself if you don't believe me.
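
Counting it in runnable form (a Python transliteration of the pseudocode above; `op` and the list contents are just stand-ins):

```python
ops = 0

def op():
    # The counted operation from the pseudocode; here it just tallies calls.
    global ops
    ops += 1

lst = [10, 20, 30, 40]   # stand-in for @list
for elem in lst:         # foreach $elem in @list
    op()

print(ops)  # → 4, one operation per element: n operations for n items
```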

To analyze quicksort and mergesort requires a decent level of what is known as mathematical sophistication. Loosely, you solve a recurrence relation (the discrete analogue of a differential equation) derived from the recursive structure of the algorithm.
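
You can at least check such a recurrence numerically (my own sketch, using merge sort's recurrence T(n) = 2·T(n/2) + n with T(1) = 0, which solves to n·log2(n) for powers of two):

```python
import math

def T(n):
    # Merge sort recurrence: two half-size subproblems plus a linear merge.
    if n <= 1:
        return 0
    return 2 * T(n // 2) + n

# For powers of two the recurrence matches n * log2(n) exactly.
for n in (16, 256, 1024):
    print(n, T(n), n * int(math.log2(n)))  # T(16)=64, T(256)=2048, T(1024)=10240
```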

Paul Nathan