I'm looking for the one algorithm or data structure which is so unknown yet useful that you think it's a horrible oversight by the computer science or programming community. If only we could all learn this one thing, a lot of good would be done to many future programs.
The best one I can come up with is interpolation search, which very few programmers know, whereas everyone knows binary search. There's little doubt that quickly searching an ordered list is a pretty useful and fundamental operation.
The two are almost identical to implement - so that's not an issue.
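To show how similar it is, here's a minimal sketch in Python (the function name and details are my own; the only real change from binary search is how `mid` is chosen):

```python
def interpolation_search(a, key):
    """Search sorted list `a` for `key`; return its index, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi and a[lo] <= key <= a[hi]:
        if a[lo] == a[hi]:  # equal endpoints: avoid division by zero
            break
        # Probe where the key "should" sit if values are uniformly spread,
        # instead of always probing the middle like binary search does.
        mid = lo + (key - a[lo]) * (hi - lo) // (a[hi] - a[lo])
        if a[mid] < key:
            lo = mid + 1
        elif a[mid] > key:
            hi = mid - 1
        else:
            return mid
    return lo if lo <= hi and a[lo] == key else -1
```

Swap the `mid` line for `mid = (lo + hi) // 2` and you have binary search back.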
It runs in O(log(log(n))) expected time on uniformly distributed data, versus binary search's O(log(n)). That means searching 4 billion numbers takes only about 5 probes instead of 32, which is a LOT better!
On data that isn't perfectly uniform, it still performs well most of the time; only when the distribution is heavily skewed does it do as badly as binary search or worse. Its worst case is O(n) on highly skewed data, but that's pretty uncommon in most real-world situations.
Even then, one can alternate between the two probe rules (one interpolation step, one binary step) to get binary search's worst case with interpolation search's average case, mitigating the extreme situations.
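That alternating idea might look like this (again just a sketch of mine, not a canonical implementation): on even iterations take an interpolation probe, on odd iterations a binary probe, so the range is guaranteed to halve at least every other step, giving an O(log n) worst case overall.

```python
def alternating_search(a, key):
    """Alternate interpolation and binary probes on sorted list `a`.

    Expected ~O(log log n) on roughly uniform data, with a guaranteed
    O(log n) worst case because every other probe halves the range.
    Returns the index of `key`, or -1 if absent.
    """
    lo, hi = 0, len(a) - 1
    use_interp = True
    while lo <= hi:
        if use_interp and a[lo] != a[hi] and a[lo] <= key <= a[hi]:
            # Interpolation probe: guess position from the value.
            mid = lo + (key - a[lo]) * (hi - lo) // (a[hi] - a[lo])
        else:
            # Binary probe: guaranteed to halve the remaining range.
            mid = (lo + hi) // 2
        if a[mid] < key:
            lo = mid + 1
        elif a[mid] > key:
            hi = mid - 1
        else:
            return mid
        use_interp = not use_interp
    return -1
```

If the interpolation guard fails (key outside the current value range) it simply falls back to a binary probe, so termination is never at the mercy of a skewed distribution.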
There's really no good reason this is so overlooked by most programmers and standard libraries.
Can anyone else beat that?