tags:
views: 1901
answers: 9

I'm trying to calculate the median of a set of values, but I don't want to store all the values as that could blow memory requirements. Is there a way of calculating or approximating the median without storing and sorting all the individual values?

Ideally I would like to write my code a bit like the following

var medianCalculator = new MedianCalculator();
foreach (var value in SourceData)
{
  medianCalculator.Add(value);
}
Console.WriteLine("The median is: {0}", medianCalculator.Median);

All I need is the actual MedianCalculator code!

Update: Some people have asked if the values I'm trying to calculate the median for have known properties. The answer is yes. One value is in 0.5 increments from about -25 to -0.5. The other is also in 0.5 increments from -120 to -60. I guess this means I can use some form of histogram for each value.

Thanks

Nick

+3  A: 

Usually, sorting the list is the easiest and a reasonably fast way of doing it: O(n log n). There is a linear-time selection algorithm for this problem (based on the median of medians), which only makes sense for very large inputs; with it you only store medians of small groups, not the whole list.

Mehrdad Afshari
The question is about neither sorting nor storing input stream.
mouviciel
+1. Mergesort would be perfect for this -- you can sort any number of elements in log(n) passes using disk files, with tiny RAM requirements.
j_random_hacker
@mouviciel: The question is about solving a practical problem. An external-memory sort is a sensible way to do this while meeting the requirements of low memory usage.
j_random_hacker
+12  A: 

If the values are discrete and the number of distinct values isn't too high, you could just accumulate the number of times each value occurs in a histogram, then find the median from the histogram counts (just add up counts from the top and bottom of the histogram until you reach the middle). Or if they're continuous values, you could distribute them into bins - that wouldn't tell you the exact median but it would give you a range, and if you need to know more precisely you could iterate over the list again, examining only the elements in the central bin.
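A minimal sketch of this idea in Python (the question's code is C#, but the shape carries over; the class and member names below are made up to mirror the MedianCalculator the question asks for). It works here because the values fall on 0.5 increments over a known range, so the number of distinct values is small:

```python
from collections import Counter

class HistogramMedianCalculator:
    """Exact median for discrete values via a count histogram.

    Memory is proportional to the number of *distinct* values, not to
    the number of samples streamed in.
    """

    def __init__(self):
        self.counts = Counter()
        self.n = 0

    def add(self, value):
        self.counts[value] += 1
        self.n += 1

    @property
    def median(self):
        # Walk the sorted distinct values, accumulating counts until we
        # pass the middle of the distribution.
        lower_mid = (self.n - 1) // 2   # index of the lower middle element
        upper_mid = self.n // 2         # index of the upper middle element
        seen = 0
        lo = hi = None
        for value in sorted(self.counts):
            seen += self.counts[value]
            if lo is None and seen > lower_mid:
                lo = value
            if seen > upper_mid:
                hi = value
                break
        return (lo + hi) / 2            # averages the two middles for even n

calc = HistogramMedianCalculator()
for v in [-1.0, -0.5, -2.5, -0.5, -1.5]:
    calc.add(v)
print(calc.median)  # -1.0
```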

David Zaslavsky
I like this answer. It's a great idea as I know the types of values being stored and can construct a histogram reasonably easily
Nick R
+2  A: 

I don't think it is possible to do this exactly without holding the list in memory. You can obviously approximate it with

  • average if you know that the data is symmetrically distributed
  • or calculate a proper median of a small subset of data (that fits in memory) - if you know that your data has the same distribution across the sample (e.g. that the first item has the same distribution as the last one)
Grzenio
+1  A: 

Find the Min and Max of the list of N items through a linear search and call them LowValue and HighValue. Let MedianIndex = (N+1)/2.

1st Order Binary Search:

Repeat the following 4 steps while LowValue < HighValue:

  1. Get MedianValue approximately = ( LowValue + HighValue ) / 2

  2. Get K = NumberOfItemsWhichAreLessThanorEqualToMedianValue

  3. If K = MedianIndex, return MedianValue

  4. If K > MedianIndex, then HighValue = MedianValue; else LowValue = MedianValue

It will be faster than sorting, without consuming memory
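A sketch of this first-order search in Python, assuming the data can be re-read from its source on each pass (`read_stream` is a hypothetical zero-argument callable returning a fresh iterator, e.g. re-opening a file). It bisects until the bracket closes rather than returning early on K = MedianIndex, since the midpoint need not be an actual data value:

```python
def median_by_bisection(read_stream, tol=1e-9):
    """Approximate the median by bisecting on *value*, re-reading the
    data once per iteration: O(1) memory, O(log(range/tol)) passes.
    For an even count this converges on the lower median, matching the
    answer's MedianIndex = (N+1)/2."""
    n = 0
    low = float("inf")
    high = float("-inf")
    for v in read_stream():            # pass 1: count, min, max
        n += 1
        low = min(low, v)
        high = max(high, v)
    median_index = (n + 1) // 2
    while high - low > tol:
        mid = (low + high) / 2
        k = sum(1 for v in read_stream() if v <= mid)   # one full pass
        if k >= median_index:
            high = mid                 # median is at or below mid
        else:
            low = mid                  # median is above mid
    return high

data = [7.0, 1.0, 5.0, 3.0, 9.0]
print(median_by_bisection(lambda: iter(data)))
```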

2nd Order Binary Search:

LowIndex = 1, HighIndex = N

Repeat the following 5 steps while LowIndex < HighIndex:

  1. Get approximate DistributionPerUnit = (HighValue - LowValue) / (HighIndex - LowIndex)

  2. Get approximate MedianValue = LowValue + (MedianIndex - LowIndex) * DistributionPerUnit

  3. Get K = NumberOfItemsWhichAreLessThanorEqualToMedianValue

  4. If K = MedianIndex, return MedianValue

  5. If K > MedianIndex, then HighIndex = K and HighValue = MedianValue; else LowIndex = K and LowValue = MedianValue

It will be faster than the 1st order search, still without consuming memory

We can also think of fitting HighValue, LowValue and MedianValue against HighIndex, LowIndex and MedianIndex to a parabola, giving a third-order search that is faster than the 2nd order one without consuming memory, and so on...
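One possible Python reading of the second-order steps (`read_stream` is again a hypothetical re-readable source). Note the interpolated guess can land on a bracket endpoint and stall, so this sketch adds a midpoint fallback that the pseudocode omits:

```python
def median_by_interpolation(read_stream, tol=1e-6, max_passes=200):
    """Second-order variant: guess the median by linear interpolation
    between (LowIndex, LowValue) and (HighIndex, HighValue) instead of
    taking the value midpoint. One full pass over the data per
    iteration, O(1) memory."""
    n = 0
    lo_v = float("inf")
    hi_v = float("-inf")
    for v in read_stream():            # pass 1: count, min, max
        n += 1
        lo_v = min(lo_v, v)
        hi_v = max(hi_v, v)
    half = (n + 1) // 2                # MedianIndex
    lo_i, hi_i = 1, n                  # LowIndex, HighIndex
    for _ in range(max_passes):
        if hi_v - lo_v <= tol:
            break
        if hi_i > lo_i:
            per_unit = (hi_v - lo_v) / (hi_i - lo_i)
            guess = lo_v + (half - lo_i) * per_unit
        else:
            guess = (lo_v + hi_v) / 2
        if not (lo_v < guess < hi_v):  # safeguard: keep the bracket shrinking
            guess = (lo_v + hi_v) / 2
        k = sum(1 for v in read_stream() if v <= guess)  # one full pass
        if k >= half:
            hi_i, hi_v = k, guess
        else:
            lo_i, lo_v = k, guess
    return hi_v

print(median_by_interpolation(lambda: iter([2.0, 4.0, 6.0, 8.0, 10.0])))  # 6.0
```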

lakshmanaraj
This is a fine way to do it, but please mention explicitly that it requires multiple (log(n) in expectation) passes through the data since you won't be keeping NumberOfItemsWhichAreLessThanorEqualToMedianValue[k] in RAM.
j_random_hacker
+4  A: 

This is tricky to get right in general, especially when handling degenerate series that are already sorted, or that have a bunch of values in one range at the "start" of the list while the end of the list has values in a different range.

The basic idea of making a histogram is most promising. This lets you accumulate distribution information and answer queries (like median) from it. The median will be approximate since you obviously don't store all values. The storage space is fixed so it will work with whatever length sequence you have.

But you can't just build a histogram from, say, the first 100 values and use that histogram forever; the changing data may make that histogram invalid. So you need a dynamic histogram that can change its range and bins on the fly.

Make a structure which has N bins. You'll store the X value of each slot transition (N+1 values total) as well as the population of the bin.

Stream in your data. Record the first N+1 values. If the stream ends before this, great, you have all the values loaded and you can find the exact median and return it. Else use the values to define your first histogram. Just sort the values and use those as bin definitions, each bin having a population of 1. It's OK to have dupes (0 width bins).

Now stream in new values. For each one, binary search to find the bin it belongs to. In the common case, you just increment the population of that bin and continue. If your sample is beyond the histogram's edges (highest or lowest), just extend the end bin's range to include it. When your stream is done, you find the median sample value by finding the bin which has equal population on both sides of it, and linearly interpolating the remaining bin-width.

But that's not enough: you still need to ADAPT the histogram to the data as it's being streamed in. When a bin gets over-full, you're losing information about that bin's sub-distribution. You can fix this by adapting based on some heuristic. The easiest and most robust one: if a bin reaches some threshold population (something like 10*v/N, where v = # of values seen so far in the stream and N is the number of bins), you SPLIT that overfull bin. Add a new edge at the midpoint of the bin and give each side half of the original bin's population. But now you have too many bins, so you need to DELETE a bin. A good heuristic for that is to find the bin with the smallest product of population and width. Delete it by merging it with its left or right neighbour (whichever neighbour itself has the smaller product of width and population). Done! Note that merging or splitting bins loses information, but that's unavoidable: you only have fixed storage.

This algorithm is nice in that it will deal with all types of input streams and give good results. If you have the luxury of choosing sample order, a random sample is best, since that minimizes splits and merges.

The algorithm also allows you to query any percentile, not just median, since you have a complete distribution estimate.

I use this method in my own code in many places, mostly for debugging logs, where some stats you're recording have an unknown distribution. With this algorithm you don't need to guess ahead of time.

The downside is that the unequal bin widths mean you have to do a binary search for each sample, so the net algorithm is O(N log N).

SPWorley
Thanks - this is a good answer, but may be too expensive for my requirements. I need to have lots of medians over a large geographic area, with different medians for each 200m by 200m area.
Nick R
Some good ideas, but I'm stuck on one thing -- when you split a bin in two, how do you decide how many from that bin go into sub-bin #1 and how many go into sub-bin #2? It seems you would need to record every value in the bin (since a bin may subdivide many times).
j_random_hacker
JRH: You split in the middle and assign half of the population to each bin. We're not storing any more information about the inner subbin data distribution to do much else, and the split is mostly to allow better data resolution from now on.
SPWorley
@Arno: Thanks for the clarification. Seems like a lossy but workable approach to me, +1.
j_random_hacker
+5  A: 

Here is a crazy approach that you might try. This is a classical problem in streaming algorithms. The rules are

  1. You have limited memory, say O(log n), where n is the number of items in the stream
  2. You can look at each item once and decide then and there what to do with it; if you store it, it costs memory, and if you throw it away, it is gone forever.

The idea for finding the median is simple. Sample O((1/a^2) log(1/p)) log(n) elements from the list at random; you can do this via reservoir sampling (see a previous question). Now simply return the median of your sampled elements, using a classical method.

The guarantee is that the relative rank of the item returned will be (1 +/- a)/2 with probability at least 1-p. So there is a probability p of failing; you can reduce it by sampling more elements. It won't return the exact median, nor guarantee that the value of the item returned is anywhere close to the median value, just that when you sort the list the item returned will sit close to the halfway point.

This algorithm uses O(log n) additional space and runs in linear time.
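A sketch of the sampling step in Python. The reservoir size `k` below is an arbitrary illustrative constant; the answer's analysis would pick it from the accuracy a and failure probability p:

```python
import random

def sampled_median(stream, k=1001, rng=None):
    """Approximate the median by keeping a uniform random sample of k
    elements (reservoir sampling), then returning the sample's median."""
    rng = rng or random.Random()
    reservoir = []
    for i, x in enumerate(stream):
        if i < k:
            reservoir.append(x)          # fill the reservoir first
        else:
            j = rng.randrange(i + 1)     # replace with probability k/(i+1)
            if j < k:
                reservoir[j] = x
    reservoir.sort()
    m = len(reservoir)
    return (reservoir[(m - 1) // 2] + reservoir[m // 2]) / 2

est = sampled_median(range(100_000), k=1001, rng=random.Random(0))
```

The reservoir guarantees a uniform sample regardless of input order, so this works even on sorted or adversarially arranged streams.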

Pall Melsted
+3  A: 

There is the 'remedian' statistic. It works by first setting up k arrays, each of length b. Data values are fed into the first array and, when it is full, its median is calculated and stored in the first position of the second array, after which the first array is re-used. When the second array is full, the median of its values is stored in the first position of the third array, etc. etc. You get the idea :)

It's simple and pretty robust. The reference is here...

http://web.ipac.caltech.edu/staff/fmasci/home/statistics_refs/Remedian.pdf
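A compact Python sketch of the buffer cascade. It is cleanest when the stream length is exactly b**k; for other lengths this sketch just takes a median over the leftover buffer contents, which simplifies the paper's weighted rule:

```python
def exact_median(xs):
    s = sorted(xs)
    m = len(s)
    return (s[(m - 1) // 2] + s[m // 2]) / 2

class Remedian:
    """Remedian estimator: k buffers of length b cover b**k values
    with only O(k*b) memory."""

    def __init__(self, b=11, k=4):
        self.b = b
        self.buffers = [[] for _ in range(k)]

    def add(self, x):
        self.buffers[0].append(x)
        i = 0
        while len(self.buffers[i]) == self.b:
            m = exact_median(self.buffers[i])   # collapse a full buffer
            self.buffers[i] = []                # ...and re-use it
            i += 1
            if i == len(self.buffers):
                self.buffers.append([])         # grow if the stream exceeds b**k
            self.buffers[i].append(m)

    def estimate(self):
        leftovers = [x for buf in self.buffers for x in buf]
        return exact_median(leftovers)

r = Remedian(b=11, k=4)
for v in range(121):     # 11**2 values fed in order
    r.add(v)
print(r.estimate())      # 60.0 — the true median of 0..120
```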

Hope this helps

michael
A: 

David's suggestion seems like the most sensible approach for approximating the median.

A running mean for the same problem is much easier to calculate:

M_n = M_{n-1} + (V_n - M_{n-1}) / n

Where M_n is the mean of the first n values, M_{n-1} is the previous mean, and V_n is the new value.

In other words, the new mean is the existing mean plus the difference between the new value and the mean, divided by the number of values.

In code this would look something like:

new_mean = prev_mean + ((value - prev_mean) / count)

though obviously you may want to consider language-specific stuff like floating-point rounding errors etc.
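As a runnable Python illustration of that one-line update (the function name is just for the example):

```python
def running_mean(values):
    """Incremental mean: new_mean = prev_mean + (value - prev_mean) / count."""
    mean = 0.0
    for count, value in enumerate(values, start=1):
        mean += (value - mean) / count
    return mean

print(running_mean([2.0, 4.0, 6.0]))  # 4.0
```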

GrahamS
+1  A: 

I use these incremental/recursive mean and median estimators, which both use constant storage:

mean += eta * (sample - mean)
median += eta * sgn(sample - median)

where eta is a small learning rate parameter (e.g. 0.001), and sgn() is the signum function which returns one of {-1, 0, 1}.

This type of incremental mean estimator seems to be used all over the place, e.g. in unsupervised neural network learning rules, but the median version seems much less common, despite its benefits (robustness to outliers). It seems that the median version could be used as a replacement for the mean estimator in many applications.
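The two update rules above, wrapped in a runnable Python sketch (the function is illustrative; eta = 0.001 is the example value from the answer, not a universal choice, and smaller eta means slower convergence but less steady-state jitter):

```python
def sgn(x):
    """Signum: -1, 0, or 1."""
    return (x > 0) - (x < 0)

def incremental_estimates(stream, eta=0.001):
    """Constant-storage running estimates of the mean and the median."""
    mean = 0.0
    median = 0.0
    for x in stream:
        mean += eta * (x - mean)          # pulled toward x proportionally
        median += eta * sgn(x - median)   # stepped toward x by a fixed eta
    return mean, median

mean, median = incremental_estimates([10.0] * 20000)  # both converge near 10.0
```

The sgn() step is what makes the median version robust: one huge outlier moves the estimate by at most eta, whereas it can drag the mean arbitrarily far.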

I would love to see an incremental mode estimator of a similar form...

(Note: I also posted this to a similar topic here: http://stackoverflow.com/questions/1058813/on-line-iterator-algorithms-for-estimating-statistical-median-mode-skewness)

Tyler Streeter