views: 84
answers: 1

Hi all:

I am reviewing my data structures and algorithm analysis lesson, and I have a question: how do I determine the space complexity of the merge sort and quick sort algorithms?

The depth of recursion is only O(lg n) for linked-list merge sort.

The amount of extra storage space needed for contiguous quick sort is O(n).

My thoughts:

Both algorithms use the divide-and-conquer strategy, so I guess the space complexity of linked-list merge sort should be the same as that of contiguous quick sort. Actually, I would opt for O(lg n) for both, because before every recursive call the list is divided in half.

Thanks for any pointers.

+1  A: 

The worst case depth of recursion for quicksort is not (necessarily) O(log n), because quicksort doesn't divide the data "in half", it splits it around a pivot which may or may not be the median. It's possible to implement quicksort to address this[*], but presumably the O(n) analysis was of a basic recursive quicksort implementation, not an improved version. That would account for the discrepancy between what you say in the blockquote, and what you say under "my thoughts".
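To illustrate the worst case, here is a minimal sketch of a basic recursive quicksort in Python (my illustration, not the lesson's code). With the first element as pivot, already-sorted input splits into parts of size 0 and n-1 at every level, so the recursion depth, and hence the stack space, is O(n):

    # A basic recursive, in-place quicksort (sketch).
    # With the first element as pivot, sorted input degenerates to
    # O(n) recursion depth, and therefore O(n) stack space.
    def quicksort(a, lo=0, hi=None):
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        pivot = a[lo]
        i, j = lo + 1, hi
        while True:
            while i <= j and a[i] <= pivot:
                i += 1
            while i <= j and a[j] > pivot:
                j -= 1
            if i > j:
                break
            a[i], a[j] = a[j], a[i]
        a[lo], a[j] = a[j], a[lo]  # pivot ends up at index j
        quicksort(a, lo, j - 1)    # recurse on both parts: depth is
        quicksort(a, j + 1, hi)    # O(n) when one part is always empty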

Other than that I think your analysis is sound - neither algorithm uses any extra memory other than a fixed amount per level of recursion, so depth of recursion dictates the answer.
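For instance, a top-down merge sort on a singly linked list can look like this (a sketch assuming a simple Node type, not the lesson's code); each call holds only a few pointers, so the O(lg n) recursion depth is the whole space cost:

    # Top-down merge sort on a singly linked list (sketch).
    # Each level uses O(1) extra pointers; the list halves at every
    # split, so recursion depth (and total extra space) is O(lg n).
    class Node:
        def __init__(self, val, nxt=None):
            self.val, self.nxt = val, nxt

    def merge_sort(head):
        if head is None or head.nxt is None:
            return head
        slow, fast = head, head.nxt   # find the middle with
        while fast and fast.nxt:      # slow/fast pointers
            slow, fast = slow.nxt, fast.nxt.nxt
        mid, slow.nxt = slow.nxt, None
        left, right = merge_sort(head), merge_sort(mid)
        dummy = tail = Node(None)     # merge by relinking in place
        while left and right:
            if left.val <= right.val:
                tail.nxt, left = left, left.nxt
            else:
                tail.nxt, right = right, right.nxt
            tail = tail.nxt
        tail.nxt = left or right
        return dummy.nxt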

Another possible way to account for the discrepancy, I suppose, is that the O(n) analysis is just wrong. Or, "contiguous quicksort" isn't a term I've heard before, so if it doesn't mean what I think it does ("quicksorting an array"), it might imply a quicksort that's necessarily space-inefficient in some sense, such as returning an allocated array instead of sorting in-place. But it would be silly to compare quicksort and mergesort on the basis of the depth of recursion of the mergesort vs. the size of a copy of the input for the quicksort.

[*] Specifically, instead of calling the function recursively on both parts, you put it in a loop. Make a recursive call on the smaller part, and loop around to do the bigger part, or equivalently push (pointers to) the larger part onto a stack of work to do later, and loop around to do the smaller part. Either way, you ensure that the depth of the stack never exceeds log n, because each chunk of work not put on the stack is at most half the size of the chunk before it, down to a fixed minimum (1 or 2 if you're sorting purely with quicksort).
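In Python, that idea might look like this (a sketch; the Lomuto-style partition helper is my own choice, any in-place partition would do):

    def partition(a, lo, hi):
        # Lomuto partition around a[hi]; returns the pivot's final index.
        pivot, i = a[hi], lo
        for k in range(lo, hi):
            if a[k] <= pivot:
                a[i], a[k] = a[k], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        return i

    def quicksort_bounded(a, lo=0, hi=None):
        # Recurse on the smaller part, loop on the larger one, so each
        # recursive call gets at most half the range: stack depth is
        # O(log n) even when the pivot choices are bad.
        if hi is None:
            hi = len(a) - 1
        while lo < hi:
            j = partition(a, lo, hi)
            if j - lo < hi - j:
                quicksort_bounded(a, lo, j - 1)  # smaller left part
                lo = j + 1                       # continue on the right
            else:
                quicksort_bounded(a, j + 1, hi)  # smaller right part
                hi = j - 1                       # continue on the left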

Steve Jessop
I think the not-so-well-chosen pivot in quick sort might account for the discrepancy. On the other hand, "contiguous quicksort" means exactly "quicksorting an array". As for the optimization you describe, I need to read more to understand it.
Tracy