views: 73
answers: 2
Hi Guys,

I want to incrementally cluster text documents, reading them as a data stream, but there seems to be a problem. Most term weighting options are based on the vector space model with TF-IDF as the weight of a feature. However, in our case the IDF of an existing attribute changes with every new data point, so the previous clustering no longer remains valid, and popular algorithms like CluStream, CURE, and BIRCH cannot be applied because they assume fixed-dimensional, static data. Can anyone point me to existing research on this or give suggestions? Thanks!
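To make the problem concrete, here is a tiny illustration (Java, with made-up counts, assuming the standard idf(t) = log(N / df(t)) weighting):

    // Tiny illustration of the drift: as N grows, the idf of a term that was
    // already in the dictionary changes, so TF-IDF vectors computed earlier
    // no longer live in the same weighted space.
    public class IdfDrift {
        static double idf(int numDocs, int docFreq) {
            return Math.log((double) numDocs / docFreq);
        }

        public static void main(String[] args) {
            // after the first 100 documents, the term appears in 10 of them
            System.out.println(idf(100, 10));  // ~2.303
            // 50 more documents arrive, 40 of which contain the term
            System.out.println(idf(150, 50));  // ~1.099 -> every old vector is now stale
        }
    }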

+1  A: 

Here's an idea off the top of my head:

What's your input data like? I'm guessing it's at least similarly themed, so you could start with a base phrase dictionary and use that for IDF. Apache Lucene is a great indexing engine. Since you have a base dictionary, you can run k-means or whatever you'd like. As documents come in, you'll have to rebuild the dictionary at some frequency (which can be off-loaded to another thread/machine/etc.) and then re-cluster.

With the data indexed in a high-performance, flexible engine like Lucene, you could run queries even as new documents are being indexed. I bet if you do some research on different clustering algorithms you'll find some good ideas.
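Here's a rough sketch of the freeze-and-rebuild loop I mean, in plain Java with no Lucene specifics; the rebuild interval, the base-dictionary constructor argument, and the commented-out kMeans() call are placeholders for whatever you actually use:

    import java.util.*;

    // Vectorize the stream against a frozen idf snapshot (the "base dictionary"),
    // and every REBUILD_EVERY documents recompute idf over everything seen so far,
    // re-vectorize, and re-cluster.
    public class StreamingTfIdfSketch {
        static final int REBUILD_EVERY = 10_000;

        private Map<String, Double> frozenIdf;                        // term -> idf snapshot
        private final List<List<String>> corpus = new ArrayList<>();  // tokenized docs, kept for rebuilds
        private final List<Map<String, Double>> vectors = new ArrayList<>();

        StreamingTfIdfSketch(Map<String, Double> baseDictionaryIdf) {
            this.frozenIdf = baseDictionaryIdf;                       // idf from the base phrase dictionary
        }

        void onDocument(List<String> tokens) {
            corpus.add(tokens);
            vectors.add(vectorize(tokens, frozenIdf));                // weights stay valid until the next rebuild
            if (corpus.size() % REBUILD_EVERY == 0) {
                frozenIdf = recomputeIdf(corpus);                     // could be off-loaded to another thread
                vectors.clear();
                for (List<String> doc : corpus) vectors.add(vectorize(doc, frozenIdf));
                // kMeans(vectors);                                   // re-cluster on the refreshed vectors
            }
        }

        static Map<String, Double> vectorize(List<String> tokens, Map<String, Double> idf) {
            Map<String, Double> tf = new HashMap<>();
            for (String t : tokens) tf.merge(t, 1.0, Double::sum);
            Map<String, Double> v = new HashMap<>();
            // terms missing from the frozen dictionary are dropped until the next rebuild picks them up
            tf.forEach((t, f) -> { if (idf.containsKey(t)) v.put(t, f * idf.get(t)); });
            return v;
        }

        static Map<String, Double> recomputeIdf(List<List<String>> docs) {
            Map<String, Integer> df = new HashMap<>();
            for (List<String> doc : docs)
                for (String t : new HashSet<>(doc)) df.merge(t, 1, Integer::sum);
            Map<String, Double> idf = new HashMap<>();
            df.forEach((t, d) -> idf.put(t, Math.log((double) docs.size() / d)));
            return idf;
        }
    }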

Some interesting papers/links:

  1. http://en.wikipedia.org/wiki/Document_classification
  2. http://www.scholarpedia.org/article/Text_categorization
  3. http://en.wikipedia.org/wiki/Naive_Bayes_classifier

Without more information, I can't see why you couldn't re-cluster every once in a while. You might want to take a look at some of the recommender systems already out there.

The Alchemist
+1  A: 

Have you looked at

TF-ICF: A New Term Weighting Scheme for Clustering Dynamic Data Streams
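If I understand the scheme correctly, the ICF weights are precomputed from a large static reference corpus rather than from the stream itself, so each incoming document can be weighted once and never revisited. A minimal sketch of that idea (the precomputed ICF map is assumed to come from such a reference corpus):

    import java.util.*;

    // Sketch of the TF-ICF idea as I read it: the icf map is computed offline
    // from a fixed reference corpus, so a streaming document's weight vector
    // never has to be updated as the stream grows.
    public class TfIcfSketch {
        private final Map<String, Double> icf;  // term -> icf, computed offline

        TfIcfSketch(Map<String, Double> precomputedIcf) {
            this.icf = precomputedIcf;
        }

        // Weight one incoming document; the result is final.
        Map<String, Double> weigh(List<String> tokens) {
            Map<String, Double> tf = new HashMap<>();
            for (String t : tokens) tf.merge(t, 1.0, Double::sum);
            Map<String, Double> v = new HashMap<>();
            tf.forEach((t, f) -> v.put(t, f * icf.getOrDefault(t, 0.0)));
            return v;
        }
    }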

dunelmtech
Looks like something useful; I'll look at this one and update here. Thanks.