This problem is a little similar to the one solved by reservoir sampling, but not the same. I think it's also a rather interesting problem.

I have a large dataset (typically hundreds of millions of elements), and I want to estimate the number of unique elements in it. There may be anywhere from a few to millions of unique elements in a typical dataset.

Of course, the obvious solution is to maintain a running hashset of the elements you encounter and count them at the end. This would yield an exact result, but it would require me to carry a potentially large amount of state as I scan through the dataset (i.e. all unique elements encountered so far).

Unfortunately, in my situation this would require more RAM than is available to me (noting that the dataset may be far larger than available RAM).

I'm wondering if there is a statistical approach that would allow me to do a single pass through the dataset and come up with an estimated unique element count at the end, while maintaining a relatively small amount of state as I scan.

The input to the algorithm would be the dataset (an Iterator in Java parlance), and it would return an estimated unique object count (probably a floating point number). It is assumed that these objects can be hashed (i.e. you can put them in a HashSet if you want to). Typically they will be strings or numbers.
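
For concreteness, a minimal sketch of the exact (but memory-hungry) baseline described above, written against that interface; the String element type is just a placeholder:

    import java.util.HashSet;
    import java.util.Iterator;
    import java.util.Set;

    // Exact unique count: simple, but keeps every distinct element in memory.
    static long exactUniqueCount(Iterator<String> data) {
        Set<String> seen = new HashSet<>();
        while (data.hasNext()) {
            seen.add(data.next());
        }
        return seen.size();
    }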

+4  A: 

You could use a Bloom Filter for a reasonable lower bound. You just do a pass over the data, counting and inserting items which were definitely not already in the set.
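
A rough sketch of that counting pass, assuming Guava's BloomFilter is available (the element type, expected insertion count and false-positive rate are arbitrary placeholders):

    import com.google.common.hash.BloomFilter;
    import com.google.common.hash.Funnels;
    import java.nio.charset.StandardCharsets;
    import java.util.Iterator;

    // Lower-bound estimate: count only elements the filter has definitely not seen.
    static long bloomLowerBound(Iterator<String> data, int expectedInsertions) {
        BloomFilter<String> filter = BloomFilter.create(
                Funnels.stringFunnel(StandardCharsets.UTF_8), expectedInsertions, 0.01);
        long count = 0;
        while (data.hasNext()) {
            // put() returns true when the filter's bits changed, i.e. the element was
            // definitely not present before; apparent duplicates (including false
            // positives) are skipped, which is why this under-counts.
            if (filter.put(data.next())) {
                count++;
            }
        }
        return count;
    }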

Strilanc
Ah, good idea - I'm kicking myself a little for not thinking of it myself as I am already very familiar with Bloom Filters.
sanity
I was thinking of a Bloom filter, but I think you have to be careful here. If you find something that matches the filter, that means that you may have seen it, or you may have gotten a false positive. If it doesn't match the filter, then you definitely haven't seen it before. The problem is, the more elements you have, the greater the likelihood of a false positive, which will actually decrease your count. I haven't worked this out in detail, but I would be concerned that you might see some odd effects by using a Bloom filter for this problem.
Brian Campbell
Brian, I was thinking that instead of increasing my count by 1.0 every time I see an element I think is new, I increase it by 1.0-P where P is the probability of a false positive (which can be computed quite easily).
sanity
sanity: that seems wrong. I would think, you'd want to increase it by `1.0` if the element is new; and by `P(false_positive)` if the element appears to be old. (And then update the Bloom Filter to include the new element).
Edward Loper
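
A sketch of the counting rule described in that comment, again assuming Guava's BloomFilter (whether the adjustment is statistically sound is exactly what the reply below questions):

    import com.google.common.hash.BloomFilter;
    import com.google.common.hash.Funnels;
    import java.nio.charset.StandardCharsets;
    import java.util.Iterator;

    // Add 1.0 for definitely-new elements, and the filter's current false-positive
    // probability for elements that merely appear to have been seen before.
    static double adjustedBloomEstimate(Iterator<String> data, int expectedInsertions) {
        BloomFilter<String> filter = BloomFilter.create(
                Funnels.stringFunnel(StandardCharsets.UTF_8), expectedInsertions, 0.01);
        double estimate = 0.0;
        while (data.hasNext()) {
            if (filter.put(data.next())) {
                estimate += 1.0;                  // definitely new
            } else {
                estimate += filter.expectedFpp(); // might be a false positive
            }
        }
        return estimate;
    }
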
Edward, hmm - I agree that I was wrong, but it feels like you are making an incorrect assumption, although I can't quite put my finger on what it is :-)
sanity
+1  A: 

If you have a hash function that you trust, then you could maintain a hashset just like you would for the exact solution, but throw out any item whose hash value is outside of some small range. E.g., use a 32-bit hash, but only keep items where the first two bits of the hash are 0. Then multiply by the appropriate factor at the end to approximate the total number of unique elements.
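
A sketch of that sampling trick, keeping the example's 1-in-4 hash range; String.hashCode() stands in for whatever trusted, well-mixed hash function would actually be used:

    import java.util.HashSet;
    import java.util.Iterator;
    import java.util.Set;

    // Keep only elements whose hash lands in a quarter of the 32-bit hash space
    // (top two bits zero), then scale the distinct count back up by 4.
    static double sampledUniqueEstimate(Iterator<String> data) {
        Set<String> sampled = new HashSet<>();
        while (data.hasNext()) {
            String element = data.next();
            if ((element.hashCode() >>> 30) == 0) { // first two bits of the hash are 0
                sampled.add(element);
            }
        }
        return sampled.size() * 4.0;
    }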

Edward Loper