Nothing can be worse than infinity.
If you keep the algorithm meaningful in any way, O(n!) is the worst upper bound you can achieve.
Since checking each permutation of a set to see whether it is sorted will take n! steps, you can't get any worse than that.
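To make that n! concrete, here is a minimal C++ sketch (my own illustration, not code from the answer; the name permutation_sort is made up) of a sort that just steps through the permutations one at a time until it happens upon the sorted one:

#include <algorithm>
#include <vector>

// Permutation sort: advance through the orderings of the data one
// std::next_permutation step at a time until the sorted one turns up.
// In the worst case this visits all n! arrangements.
void permutation_sort(std::vector<long>& a) {
    while (!std::is_sorted(a.begin(), a.end()))
        std::next_permutation(a.begin(), a.end());
}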
If you're doing more steps than that, then the algorithm has no real useful purpose. Not to mention the following simple sorting algorithm with O(∞):
list = someList
while (list not sorted):
    doNothing
1. Put your items to be sorted on index cards.
2. Throw them into the air on a windy day, a mile from your house. (original version of this step)
2. Throw them into a bonfire and confirm they are completely destroyed. (revised step; see the edit note below)
3. Check your kitchen floor for the correct ordering.
4. Repeat if it's not in the correct order.
Best case scenario is O(∞).
Edited the above based on an astute observation by KennyTM.
A worst case performance of O(∞) might not even make it an algorithm according to some.
An algorithm is just a series of steps, and you can always do worse by tweaking it a little so that it produces the desired output in more steps than it did before. One could purposely build the count of steps taken into the algorithm and make it terminate and produce the correct output only after X steps have been done. That X could very well be of the order of O(n^2) or O(n^(n!)) or whatever the algorithm is designed to do. That would effectively increase its best-case as well as average-case bounds.
But your worst-case scenario cannot be topped :)
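A minimal C++ sketch of that padding idea (the name padded_sort and the choice of X = n^2 are mine, just for illustration):

#include <algorithm>
#include <vector>

// Hypothetical "padded" sort: produce the correct output, but only after an
// extra X = n*n steps have been deliberately wasted. X could just as well be
// chosen to be n^(n!) or anything else the designer fancies.
void padded_sort(std::vector<long>& a) {
    std::sort(a.begin(), a.end());                 // the actual sorting
    volatile long sink = 0;                        // keeps the compiler from deleting the loop
    const long extra = static_cast<long>(a.size()) * static_cast<long>(a.size());
    for (long i = 0; i < extra; ++i)
        sink += i;                                 // X pointless extra steps
}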
Bozo sort is a related algorithm that checks whether the list is sorted and, if not, swaps two items at random. It has the same best- and worst-case performances, but I would intuitively expect the average case to be longer than bogosort's. It's hard to find (or produce) any data on the performance of this algorithm.
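A minimal C++ sketch of bozo sort as described (my own code; bozo_sort and the explicit RNG parameter are not from the answer). Instrumenting the loop with a counter would be one way to produce the performance data mentioned above:

#include <algorithm>
#include <random>
#include <vector>

// Bozo sort: while the list isn't sorted, swap two items chosen at random.
void bozo_sort(std::vector<long>& a, std::mt19937& rng) {
    if (a.size() < 2) return;
    std::uniform_int_distribution<std::size_t> pick(0, a.size() - 1);
    while (!std::is_sorted(a.begin(), a.end()))
        std::swap(a[pick(rng)], a[pick(rng)]);
}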
You should do some research into the exciting field of Pessimal Algorithms and Simplexity Analysis. The authors of that paper work on the problem of developing a sort with a pessimal best case (your bogosort's best case is Ω(n), while slowsort (see the paper) has a non-polynomial best-case time complexity).
You could make any sort algorithm slower by running your "is it sorted" step randomly. Something like the following (a code sketch appears after the list):
1. Create an array of booleans the same size as the array you're sorting. Set them all to false.
2. Run an iteration of bogosort.
3. Pick two random elements.
4. If the two elements are sorted in relation to each other (i < j && array[i] < array[j]), mark the indexes of both in the boolean array as true. Otherwise, start over.
5. Check whether all of the booleans in the array are true. If not, go back to 3.
6. Done.
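A minimal C++ sketch of those steps (all names are mine; I read "start over" as reshuffling and resetting the flags, and I relax the comparison to <= so that duplicate values can still be verified; the answer doesn't pin either detail down):

#include <algorithm>
#include <random>
#include <vector>

// Bogosort with a randomized "is it sorted" check, following the steps above.
void randomly_verified_bogosort(std::vector<long>& a, std::mt19937& rng) {
    if (a.size() < 2) return;
    std::uniform_int_distribution<std::size_t> pick(0, a.size() - 1);
    bool done = false;
    while (!done) {
        std::shuffle(a.begin(), a.end(), rng);            // step 2: one iteration of bogosort
        std::vector<bool> in_order(a.size(), false);      // step 1: all flags false
        bool failed = false;
        while (!failed &&
               !std::all_of(in_order.begin(), in_order.end(),
                            [](bool b) { return b; })) {  // step 5: all flags true yet?
            std::size_t i = pick(rng), j = pick(rng);     // step 3: two random indexes
            if (i > j) std::swap(i, j);
            if (i == j || a[i] <= a[j]) {                 // step 4: pair is in relative order
                in_order[i] = true;
                in_order[j] = true;
            } else {
                failed = true;                            // step 4: otherwise start over
            }
        }
        done = !failed;                                   // step 6: done
    }
}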
Introduction
Intelligent design sort is a sorting algorithm based on the theory of intelligent design.
Algorithm Description
The probability of the original input list being in the exact order it's in is 1/(n!). There is such a small likelihood of this that it's clearly absurd to say that this happened by chance, so it must have been consciously put in that order by an intelligent Sorter. Therefore it's safe to assume that it's already optimally Sorted in some way that transcends our naïve mortal understanding of "ascending order". Any attempt to change that order to conform to our own preconceptions would actually make it less sorted.
Analysis
This algorithm is constant in time, and sorts the list in-place, requiring no additional memory at all. In fact, it doesn't even require any of that suspicious technological computer stuff. Praise the Sorter!
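If you want the whole algorithm in code, it is the empty function; a C++ rendering (mine, obviously hypothetical) might look like this:

#include <vector>

// Intelligent design sort: the list is already optimally Sorted in a way that
// transcends our mortal notion of "ascending order", so we must not touch it.
// Constant time, in place, zero extra memory.
void intelligent_design_sort(std::vector<long>& /*list*/) {
    // intentionally left empty
}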
Feedback
Gary Rogers writes:
Making the sort constant in time denies the power of The Sorter. The Sorter exists outside of time; thus the sort is timeless. To require time to validate the sort diminishes the role of the Sorter. Thus... this particular sort is flawed, and cannot be attributed to 'The Sorter'.
Heresy!
A sorting algorithm that assumes that the many-worlds interpretation of quantum mechanics is correct:
- Check that the list is sorted. If not, destroy the universe.
At the conclusion of the algorithm, the list will be sorted in the only universe left standing. This algorithm takes worst-case O(N) and average-case O(1) time. In fact, the average number of comparisons performed is 2: there's a 50% chance that the universe will be destroyed on the second element, a 25% chance that it'll be destroyed on the third, and so on.
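A sketch of that in the same C++ style (entirely my own; destroy_universe() is of course a hypothetical primitive, with std::abort() standing in for ending this branch of the multiverse):

#include <algorithm>
#include <cstdlib>
#include <vector>

// Hypothetical primitive: remove the current universe from consideration.
void destroy_universe() { std::abort(); }

// Many-worlds sort as described: if the list isn't sorted, destroy the universe.
void many_worlds_sort(const std::vector<long>& a) {
    if (!std::is_sorted(a.begin(), a.end()))
        destroy_universe();
    // Reaching this point means we are in a universe where the list was sorted.
}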
I had a lecturer who once suggested generating a random array, checking whether it was sorted, and then checking whether its data was the same as the array to be sorted.
Best case O(N) (first time, baby!). Worst case O(Never).
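A minimal C++ sketch of the lecturer's idea as I read it (all names are mine, and the value range for the random draws is an assumption the answer doesn't make): keep generating completely random arrays until one is both sorted and contains exactly the input's data.

#include <algorithm>
#include <random>
#include <vector>

// Generate random arrays until one is sorted AND holds the same data as the input.
std::vector<long> random_array_sort(const std::vector<long>& input, std::mt19937& rng) {
    std::uniform_int_distribution<long> value(-1000, 1000);        // assumed value range
    for (;;) {
        std::vector<long> candidate(input.size());
        for (long& x : candidate) x = value(rng);                  // a completely random array
        if (std::is_sorted(candidate.begin(), candidate.end()) &&  // is it sorted?
            std::is_permutation(candidate.begin(), candidate.end(),
                                input.begin()))                    // same data as the input?
            return candidate;                                      // best case: first time
    }
}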
My favorite slow sorting algorithm is the stooge sort:
// swap() exchanges the values the two pointers refer to
void swap(long *a, long *b) { long t = *a; *a = *b; *b = t; }

void stooges(long *begin, long *end) {
    if( (end-begin) <= 1 ) return;
    // put the smaller of the first and last elements in front
    if( begin[0] > end[-1] ) swap(begin, end-1);
    if( (end-begin) > 2 ) {              // recurse only on 3+ elements, or it never terminates
        long one_third = (end-begin)/3;
        stooges(begin, end-one_third);   // sort the first two thirds
        stooges(begin+one_third, end);   // sort the last two thirds
        stooges(begin, end-one_third);   // sort the first two thirds again
    }
}
The worst case complexity is O(n^(log(3) / log(1.5))) = O(n^2.7095...).
Another slow sorting algorithm is actually named slowsort!
// uses the same swap() helper as the stooge sort above
void slow(long *start, long *end) {
    if( (end-start) <= 1 ) return;
    long *middle = start + (end-start)/2;
    slow(start, middle);                         // sort the first half
    slow(middle, end);                           // sort the second half
    // the larger of the two halves' maxima belongs at the very end
    if( middle[-1] > end[-1] ) swap(middle-1, end-1);
    slow(start, end-1);                          // sort everything except the last element
}
This one takes n^Ω(log n) time even in the best case, which makes it even slower than stooge sort.
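Both functions take a begin/end pointer pair, so a tiny driver (my addition) for trying either of them looks like this:

#include <cstdio>

// Assumes stooges(), slow(), and swap() from the answers above are in scope.
int main() {
    long data[] = {5, 2, 9, 1, 7};
    stooges(data, data + 5);                 // sorts in place, very slowly
    for (long x : data) std::printf("%ld ", x);
    std::printf("\n");                       // prints: 1 2 5 7 9
    return 0;
}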