Handling repeated elements in quicksort

I have found a way to handle repeated elements more efficiently in quicksort and would like to know if anyone has seen this done before. This method greatly reduces the overhead involved in checking for repeated elements, which improves performance both with and without repeated elements. Typically, repeated elements are handled in one of a few ways, which I will enumerate first.

1. The Dutch National Flag method, which partitions the array as [ < pivot | == pivot | unsorted | > pivot ].

2. Putting the equal elements to the far left during the partitioning, so the layout is [ == pivot | < pivot | unsorted | > pivot ], and then moving them to the center afterwards.

3. Bentley-McIlroy partitioning, which puts the == elements on both sides, so the layout is [ == pivot | < pivot | unsorted | > pivot | == pivot ], and then moves them to the middle afterwards.

The last two methods are done to try to reduce the overhead.
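For reference, here is a minimal sketch of the first of these (a Dutch National Flag style three-way partition); the function name and int-array signature are just for illustration. Note that every element is checked against the pivot, with up to two comparisons each, which is exactly the per-element overhead I would like to avoid:

// Dutch National Flag / three-way partition around pivot = a[right].
// On return: a[left..*lo-1] < pivot, a[*lo..*hi] == pivot, a[*hi+1..right] > pivot.
void three_way_partition(int a[], long left, long right, long *lo, long *hi){
    int pivot = a[right];
    long lt = left, gt = right, k = left;
    while(k <= gt){
        if(a[k] < pivot){
            int t = a[k]; a[k] = a[lt]; a[lt] = t;
            lt++; k++;
        } else if(a[k] > pivot){
            int t = a[k]; a[k] = a[gt]; a[gt] = t;
            gt--;   // don't advance k: the swapped-in element hasn't been examined yet
        } else {
            k++;    // equal to the pivot, leave it in the middle block
        }
    }
    *lo = lt;   // first index equal to the pivot
    *hi = gt;   // last index equal to the pivot
}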

My Method

Now, let me explain how my method works; it does better because it uses fewer comparisons. I use two quicksort functions in tandem rather than just one. The first quicksort function I will call q1, and it partitions an array as [ < pivot | unsorted | >= pivot ]. The second quicksort function I will call q2, and it partitions the array as [ <= pivot | unsorted | > pivot ].

Let's look at how to use these in tandem to improve the handling of repeated elements. We first call q1 on the whole array. It picks a pivot, which we will call pivot1, and partitions around it, so at this point the array looks like [ < pivot1 | >= pivot1 ]. The [ < pivot1 ] partition is sent to q1 again, which is fairly routine, so let's look at the other partition first.

The [ >= pivot1 ] partition is sent to q2. q2 chooses a pivot, which we will call pivot2, from within this subarray and partitions it into [ <= pivot2 | > pivot2 ]. If we look at the entire array at this point, it looks like [ < pivot1 | >= pivot1 and <= pivot2 | > pivot2 ]. This looks very much like a dual-pivot quicksort.

Now, let's return to the subarray inside q2 ([ <= pivot2 | > pivot2 ]). The [ > pivot2 ] partition just goes back to q1, which is not very interesting. For the [ <= pivot2 ] partition, we first check whether pivot1 == pivot2. If they are equal, then this partition is already sorted because it consists entirely of equal elements! If the pivots aren't equal, then we just send this partition to q2 again, which picks a pivot (pivot3), partitions, and if pivot3 == pivot1, it doesn't have to sort its [ <= pivot3 ] partition, and so on. For example, if the entire array is one repeated value, q1 does one partitioning pass, q2 does one more, sees that pivot2 == pivot1, and stops, so the whole array is sorted in roughly two linear passes. Hopefully, you get the point by now. The improvement with this technique is that equal elements are handled without having to check each element against the pivots for equality. In other words, it uses fewer comparisons.

There is one other possible improvement that I haven't tried yet: check in q2 whether the [ <= pivot2 ] partition is rather large (or the [ > pivot2 ] partition is very small) compared to the size of its whole subarray, and in that case do a more standard check for repeated elements (one of the methods listed above); a rough sketch of this appears after the source code below.

Source Code

Here are two very simplified qs1 and qs2 functions. They use the Sedgewick converging-pointers method of partitioning. They can obviously be optimized further (they choose pivots extremely poorly, for instance), but this is just to show the idea. My own implementation is longer, faster, and much harder to read, so let's start with this:

// Forward declaration so qs1 can call qs2 before qs2 is defined
void qs2(int a[], long left, long right);

// qs1 sorts into [ < p | >= p ]
void qs1(int a[], long left, long right){
    // Pick a pivot and set up some indices
    int pivot = a[right], temp;
    long i = left - 1, j = right;
    // do the sort
    for(;;){
        while(a[++i] < pivot);
        while(a[--j] >= pivot) if(i == j) break;
        if(i >= j) break;
        temp = a[i];
        a[i] = a[j];
        a[j] = temp;
    }
    // Put the pivot in the correct spot
    temp = a[i];
    a[i] = a[right];
    a[right] = temp;

    // send the [ < p ] partition to qs1
    if(left < i - 1)
        qs1(a, left, i - 1);
    // send the [ >= p] partition to qs2
    if(right > i + 1)
        qs2(a, i + 1, right);
}

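// qs2 sorts into [ <= p | > p ]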
void qs2(int a[], long left, long right){
    // Pick a pivot and set up some indices
    int pivot = a[left], temp;
    long i = left, j = right + 1;
    // do the sort
    for(;;){
        while(a[--j] > pivot);
        while(a[++i] <= pivot) if(i == j) break;
        if(i >= j) break;
        temp = a[i];
        a[i] = a[j];
        a[j] = temp;
    }
    // Put the pivot in the correct spot
    temp = a[j];
    a[j] = a[left];
    a[left] = temp;

    // Send the [ > p ] partition to qs1
    if(right > j + 1)
        qs1(a, j + 1, right);
    // Here is where we check the pivots.
    // a[left-1] is the other pivot we need to compare with.
    // This handles the repeated elements.
    if(pivot != a[left-1])
        // Since the pivots don't match, pass the [ <= p ] partition on to qs2
        if(left < j - 1)
            qs2(a, left, j - 1);
}
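As for the untried variation mentioned earlier (falling back to a more standard duplicate check when the [ > p ] partition of qs2 is very small), it might look roughly like the sketch below. This is only an idea I haven't measured; the helper name and the threshold are placeholders:

// Untried sketch: sweep a[left..end] once and park every element equal to
// `pivot` at the right end of that range (next to the pivot's final spot).
// Returns the last index holding an element strictly less than the pivot.
long park_equal_elements(int a[], long left, long end, int pivot){
    long k = left;
    while(k <= end){
        if(a[k] == pivot){
            int t = a[k]; a[k] = a[end]; a[end] = t;
            end--;      // this equal element is now in its final position
        } else {
            k++;
        }
    }
    return end;
}

// Inside qs2, the tail recursion could then be supplemented with something
// like this (the size-2 threshold is a guess):
//     if(pivot != a[left-1]){
//         if(right - j < 2){
//             long end = park_equal_elements(a, left, j - 1, pivot);
//             if(left < end) qs2(a, left, end);
//         } else if(left < j - 1){
//             qs2(a, left, j - 1);
//         }
//     }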

I know that this is a rather simple idea, but it gives a pretty significant improvement in runtime when I add in the standard quicksort improvements (median-of-3 pivot choosing and insertion sort for small arrays, for a start). If you are going to test with this code, only do it on random data because of the poor pivot choosing (or improve the pivot choice). To use this sort you would call:

qs1(array,0,indexofendofarray);
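
For what it's worth, here is a minimal driver, assuming the qs1/qs2 functions above are in the same file; the array contents are just made-up sample data, and note that the right bound is the index of the last element, not the length:

#include <stdio.h>

int main(void){
    int a[] = {5, 3, 5, 1, 5, 2, 5, 4, 5};
    long n = sizeof(a) / sizeof(a[0]);

    qs1(a, 0, n - 1);               // right bound is inclusive

    for(long k = 0; k < n; k++)     // prints: 1 2 3 4 5 5 5 5 5
        printf("%d ", a[k]);
    printf("\n");
    return 0;
}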

Some Benchmarks

If you want to know just how fast it is, here is a little bit of data for starters. This uses my optimized version, not the one given above. However, the one given above is still much closer in time to the dual-pivot quicksort than to std::sort. On highly random data with 2,000,000 elements, I get these times (from sorting several consecutive datasets):

std::sort - 1.609 seconds
dual-pivot quicksort - 1.25 seconds
qs1/qs2 - 1.172 seconds

where std::sort is the C++ Standard Library sort, the dual-pivot quicksort is the one published several months ago by Vladimir Yaroslavskiy, and qs1/qs2 is my quicksort implementation.

On much less random data, with 2,000,000 elements generated with rand() % 1000 (which means each value has roughly 2000 copies), the times are:

std::sort - 0.468 seconds
dual-pivot quicksort - 0.438 seconds
qs1/qs2 - 0.407 seconds

There are some instances where the dual-pivot quicksort wins out and I do realize that the dual-pivot quicksort could be optimized more, but the same could be said for my quicksort.
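
In case anyone wants to run a similar test, a harness along these lines should reproduce the low-entropy case; it assumes the qs1/qs2 functions above are in the same file, and it is only a sketch (the timing method and seed handling here are not exactly what I used for the numbers above):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 2000000

int main(void){
    int *a = malloc(N * sizeof(int));
    if(a == NULL) return 1;

    // Low-entropy data: values 0..999, so each value appears roughly 2000 times
    srand((unsigned)time(NULL));
    for(long k = 0; k < N; k++)
        a[k] = rand() % 1000;

    clock_t start = clock();
    qs1(a, 0, N - 1);
    clock_t end = clock();

    printf("qs1/qs2: %.3f seconds\n", (double)(end - start) / CLOCKS_PER_SEC);
    free(a);
    return 0;
}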

Anyone seen this before?

I know this is a long question/explanation, but have any of you seen this improvement before? If so, then why isn't it being used?

A: 

It's a great improvement and I'm sure it has been implemented specifically in cases where a lot of equal objects are expected. There are many off-the-wall tweaks of this kind.

If I understand all you wrote correctly, the reason it's not generally "known" is that it does improve the basic O(n^2) performance. That means, double the number of objects, quadruple the time. Your improvement doesn't change this unless all objects are equal.

Martin
I think you mean "it doesn't improved the basic O(n^2) performance"
David Oneill
I think that you're missing the point. To have a decent quicksort, you need to be able to handle repeated elements. The current ways of doing this add much more to the quicksort than my method. The O(N^2) worst case performance comes from repeated elements and/or choosing bad pivots. This improvement tackles the repeated elements part and a median-of-3 pivot choosing method or picking random pivots can help with choosing better pivots.
Justin Peel
I think you meant 'I think you mean "it doesn't improve the basic O(n^2) performance"'
littlegreen
n^2 is just the worst case, without much practical consequence. As I have to run it on a real machine, where c1*O(n^2) = c2*O(n log n), I want to know the constants!
Stephan Eggermont
Yes, I agree, O(n^2) is of little practical value, but that's my theory on why you don't find this type of improvement published. Well, actually, it might be because there are other sorting methods which might be more interesting to improve on. I do like the way you're handling repeated elements.
Martin
By the way, with randomly chosen pivots, this method makes worst-case behavior extremely rare. It basically eradicates it if I check explicitly for repeated elements when the [ > pivot ] partition of qs2 has a size < 2.
Justin Peel
+2  A: 

Haven't seen it before, looks interesting. Can we see some benchmark numbers for the optimized versions? (also sorting smallest subrange first?)

Stephan Eggermont
Those benchmarks were for my optimized version. The optimized version has insertion sort for small subarrays, use of the pivot's slot in the array to reduce the cost of swapping, unrolling of the first pass of the loop to reduce the total number of comparisons (ask if you're really curious), and median-of-3 pivot choosing.
Justin Peel
A: 

std::sort is not exactly fast.

Here are results I get comparing it to randomized parallel nonrecursive quicksort:

pnrqSort (longs):
  1 000 000      36ms        (items per ms: 27777.8)
  5 000 000      140ms       (items per ms: 35714.3)
  10 000 000     296ms       (items per ms: 33783.8)
  50 000 000     1s 484ms    (items per ms: 33692.7)
  100 000 000    2s 936ms    (items per ms: 34059.9)
  250 000 000    8s 300ms    (items per ms: 30120.5)
  400 000 000    12s 611ms   (items per ms: 31718.3)
  500 000 000    16s 428ms   (items per ms: 30435.8)

std::sort (longs):
  1 000 000      134ms       (items per ms: 7462.69)
  5 000 000      716ms       (items per ms: 6983.24)

std::sort (vector of longs):
  1 000 000      511ms       (items per ms: 1956.95)
  2 500 000      943ms       (items per ms: 2651.11)

Since you have an extra method, it is going to cause more stack use, which will ultimately slow things down. Why median-of-3 is used, I don't know, because it's a poor method, but with random pivot points quicksort never has big issues with uniform or presorted data, and there's no danger of intentional median-of-3 killer data.

Charles Eli Cheese
Yes, I've thought about using other pivot choosing methods including randomized pivots. That isn't the point. Also, notice that yours is both nonrecursive and parallel. Of course it will be faster! I used recursion because it is much simpler to implement and easier for people to quickly understand. My method can be made both nonrecursive and parallel as well. Yes, std::sort is not the fastest, but it provides a common function for comparison. The dual-pivot quicksort, however, is quite fast for being recursive and serial.
Justin Peel
So what is the point exactly? Apparently none, to downvote my response for no reason. Using two pivot methods will be twice the stack overhead, as I pointed out, and also as I pointed out it doesn't gain you anything over existing methods, so what's the point? Apparently none, just as with the question itself.
Charles Eli Cheese
I downvoted your response because you were comparing apples to oranges. Comparing a nonrecursive, parallel quicksort with a recursive, serial quicksort is meaningless. You didn't even specify how many processors were being used. Using two different pivot methods does not double the number of stack calls - it is the same number of calls as using your basic quicksort. If you add the total number of stack calls to each function (qs1 and qs2) and to a basic quicksort you should get the same number. Also, maybe you missed that this can be used to improve nonrecursive and parallel methods.
Justin Peel
Actually, the number of calls to qs1 and qs2 combined will be less than the stack calls in a basic quicksort when there is repeated data.
Justin Peel