+3  A: 

Divide the interval between minimum and maximum of your data into (say) 1000 bins and calculate a histogram. Then build partial sums and see where they first exceed 5000 or 95000.
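
For illustration, a minimal Python sketch of that idea (the 1000-bin count and the 5%/95% cut points follow the description above; the function name and edge handling are just illustrative):

    def percentile_cutoffs(data, n_bins=1000, low=0.05, high=0.95):
        """Approximate the low/high percentile cut values with a fixed-bin histogram."""
        lo, hi = min(data), max(data)
        width = (hi - lo) / n_bins or 1.0          # guard against all-equal data
        counts = [0] * n_bins
        for x in data:
            i = min(int((x - lo) / width), n_bins - 1)
            counts[i] += 1

        n, running = len(data), 0
        low_cut, high_cut = None, None
        for i, c in enumerate(counts):             # partial sums over the bins
            running += c
            if low_cut is None and running >= low * n:
                low_cut = lo + (i + 1) * width     # upper edge of this bin
            if running >= high * n:
                high_cut = lo + (i + 1) * width
                break
        return low_cut, high_cut

The result is only as precise as the bin width; as the comments below note, a few extreme outliers can stretch the range and distort the bins.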

Henrik
Nice ... quicksort, and cut off the top and bottom 5000. Without knowing the distribution, I don't know how you could do better.
John at CashCommons
Bucket sort is more appropriate for this.
Brian
This sounds eminently practical, albeit not always effective. A few extreme outliers could really distort your bins...
Eamon Nerbonne
A: 

Not an expert, but my memory suggests:

  • to determine percentile points exactly you need to sort and count
  • taking a sample from the data and calculating the percentile values from it sounds like a good plan for a decent approximation, if you can get a good sample
  • if not, as Henrik suggested, you can avoid the full sort by bucketing the data and counting the buckets
Unreason
+4  A: 

You could estimate your percentiles from just a part of your dataset, like the first few thousand points.

The Glivenko–Cantelli theorem ensures that this would be a fairly good estimate, if you can assume your data points to be independent.
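
For concreteness, a minimal Python sketch of that idea, assuming the data fits in a list and you can draw a uniform random sample (the sample size here is arbitrary):

    import random

    def approx_percentiles(data, sample_size=5000, low=0.05, high=0.95):
        """Estimate percentile values from a random sample of the data."""
        sample = sorted(random.sample(data, min(sample_size, len(data))))
        k_low = int(low * (len(sample) - 1))
        k_high = int(high * (len(sample) - 1))
        return sample[k_low], sample[k_high]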

Jens
Unfortunately, the data points are not independent; they are sorted by external criteria - but I could iterate in random order. I don't understand how the linked theorem would practically let me estimate the percentiles - can you give an example, e.g. for the normal distribution?
Eamon Nerbonne
@Eamon: The linked theorem simply states that the empirical distribution function (which you'd use implicitly when calculating percentiles based on data) is a good estimate for the real distribution. You don't actually have to use it =)
Jens
Ahh, OK, I see what you mean :-)
Eamon Nerbonne
+1  A: 

There are a couple of basic approaches I can think of. The first is to compute the range (by finding the highest and lowest values), project each element to a percentile ((x - min) / range), and throw out any that evaluate to lower than .05 or higher than .95.

The second is to compute the mean and standard deviation. A span of 2 standard deviations from the mean (in both directions) will enclose 95% of a normally-distributed sample space, meaning your outliers would be below the 2.5th and above the 97.5th percentiles. Calculating the mean of a series is linear, as is the standard deviation (the square root of the mean of the squared differences between each element and the mean). Then subtract 2 sigmas from the mean and add 2 sigmas to the mean, and you've got your outlier limits.

Both of these will compute in roughly linear time; the first one requires two passes, the second one takes three (once you have your limits you still have to discard the outliers). Since this is a list-based operation, I do not think you will find anything with logarithmic or constant complexity; any further performance gains would require either optimizing the iteration and calculation, or introducing error by performing the calculations on a sub-sample (such as every third element).
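
For what it's worth, a minimal Python sketch of the second approach (mean ± 2 sigma; the function name is illustrative, and note the normality caveat raised in the comment below):

    import math

    def sigma_limits(data, k=2.0):
        """Two passes: one for the mean, one for the variance."""
        n = len(data)
        mean = sum(data) / n
        sigma = math.sqrt(sum((x - mean) ** 2 for x in data) / n)
        return mean - k * sigma, mean + k * sigma

    # third pass: discard the outliers
    # lo, hi = sigma_limits(values)
    # trimmed = [x for x in values if lo <= x <= hi]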

KeithS
The first suggestion isn't throwing out the outer 5th percentiles but doing something based on the most extreme outliers, which is highly unstable. The second suggestion assumes that the data is normally distributed, which it explicitly is not.
Eamon Nerbonne
+2  A: 

I used to identify outliers by calculating the standard deviation. Everything with a distance of more than 2 (or 3) times the standard deviation from the average is an outlier. 2 times = about 95%.

Since you are already calculating the average, it is also very easy and very fast to calculate the standard deviation.

You could also use only a subset of your data to calculate the numbers.

GvS
The data is not normally distributed.
Eamon Nerbonne
+2  A: 

The histogram solution from Henrik will work. You can also use a selection algorithm to efficiently find the k largest or smallest elements in an array of n elements in O(n). To use this for the 95th percentile, set k = 0.05n and find the k largest elements.

Reference:

http://en.wikipedia.org/wiki/Selection_algorithm#Selecting_k_smallest_or_largest_elements
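
If NumPy happens to be available, its partition routine (an introselect-based selection, linear on average) can stand in for a hand-rolled selection algorithm; a sketch:

    import numpy as np

    def percentile_cuts(values, low=0.05, high=0.95):
        """Find the low/high percentile values via selection rather than a full sort."""
        a = np.asarray(values)
        k_low = int(low * (len(a) - 1))
        k_high = int(high * (len(a) - 1))
        part = np.partition(a, [k_low, k_high])  # those two positions end up in sorted order
        return part[k_low], part[k_high]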

Spike Gronim
Right, that's what I was looking for - a selection algorithm!
Eamon Nerbonne
+3  A: 

According to its creator, a SoftHeap can be used to:

compute exact or approximate medians and percentiles optimally. It is also useful for approximate sorting...

Eugen Constantin Dinca
+1 Hmm, sounds interesting!
Eamon Nerbonne
@Eamon the whole idea behind the SoftHeap and its applications are really cool.
Eugen Constantin Dinca
+1  A: 

A good general answer to your problem seems to be RANSAC. Given a model and some noisy data, the algorithm efficiently recovers the parameters of the model.
You will have to choose a simple model that can map your data. Anything smooth should be fine. Say, a mixture of a few Gaussians. RANSAC will fit the parameters of your model and estimate a set of inliers at the same time. Then throw away whatever doesn't fit the model properly.

Ugo
I've got a set of numbers, not some complex model - RANSAC looks like it'd be slow and error-prone, and for such a simple case better solutions exist.
Eamon Nerbonne
A: 

One set of data of 100k elements takes almost no time to sort, so I assume you have to do this repeatedly. If the data set is the same set just updated slightly, you're best off building a tree (O(N log N)) and then removing and adding new points as they come in (O(K log N) where K is the number of points changed). Otherwise, the kth largest element solution already mentioned gives you O(N) for each dataset.
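
As a sketch of the incremental variant, the third-party sortedcontainers library (assumed available here; any balanced-tree structure would do) gives roughly O(log N) adds, removes, and indexed lookups:

    from sortedcontainers import SortedList  # third-party; assumed available

    class PercentileTracker:
        def __init__(self, data):
            self.values = SortedList(data)            # O(N log N) build

        def replace(self, old_value, new_value):      # O(log N) per changed point
            self.values.remove(old_value)
            self.values.add(new_value)

        def percentile(self, p):                      # e.g. p=0.05 or p=0.95
            return self.values[int(p * (len(self.values) - 1))]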

Rex Kerr
+1  A: 

You could filter out points beyond 2 or 3 standard deviations even if the data is not normally distributed; at least it will be done in a consistent manner, which should be important.

As you remove the outliers, the std dev will change, so you could do this in a loop until the change in std dev is minimal. Whether or not you want to do this depends on why you are manipulating the data this way. Some statisticians have major reservations about removing outliers, but some remove the outliers to show that the data is fairly normally distributed.
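
A minimal sketch of that loop (the 2-sigma factor and the stopping tolerance are arbitrary choices):

    import math

    def sigma_clip(data, k=2.0, tol=1e-3):
        """Repeatedly drop points beyond k standard deviations until sigma stabilizes."""
        kept = list(data)
        prev_sigma = None
        while len(kept) > 1:
            mean = sum(kept) / len(kept)
            sigma = math.sqrt(sum((x - mean) ** 2 for x in kept) / len(kept))
            if prev_sigma is not None and abs(prev_sigma - sigma) <= tol * prev_sigma:
                break
            kept = [x for x in kept if abs(x - mean) <= k * sigma]
            prev_sigma = sigma
        return kept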

TheOutlier
If the data is located mostly in the extremes - i.e. the opposite of normal, if you will - then this approach may remove large parts of the data. I really don't want to remove more than a small fraction of the data, and preferably only when those points really are outliers. I'm suppressing outliers because they're distracting - they're just cropped from the visualization, not removed from the actual data.
Eamon Nerbonne
By definition, only a small fraction of your data can be in the extremes. Per Chebyshev's inequality, only 1/9th of your distribution can be more than 3 standard deviations away; only 1/16th can be 4 deviations away. And those limits are only reached in the degenerate case where your distribution is just two spikes. So, calculating the deviation in O(N) is a valid and efficient way to filter outliers.
MSalters