views: 781
answers: 4

Are there any algorithms that can help with hierarchical clustering? Google's MapReduce only has an example of k-clustering. In the case of hierarchical clustering, I'm not sure how the work can be divided between nodes. The other resource I found is http://issues.apache.org/jira/browse/MAHOUT-19, but it's not apparent which algorithms are used.

A: 

You could look at some of the work being done with Self-Organizing Maps (Kohonen's neural network method)... the guys at Vienna University of Technology have done some work on distributed calculation of their growing hierarchical map algorithm.

This is a little on the edge of your clustering question, so it may not help, but I can't think of anything closer ;)

Jaykul
+1  A: 

Clark Olson reviews several distributed algorithms for hierarchical clustering:

C. F. Olson. "Parallel Algorithms for Hierarchical Clustering." Parallel Computing, 21:1313-1325, 1995, doi:10.1016/0167-8191(95)00017-I.

Parunak et al. describe an algorithm inspired by how ants sort their nests:

H. Van Dyke Parunak, Richard Rohwer, Theodore C. Belding, and Sven Brueckner: "Dynamic Decentralized Any-Time Hierarchical Clustering." In Proc. 4th International Workshop on Engineering Self-Organising Systems (ESOA), 2006, doi:10.1007/978-3-540-69868-5

Vebjorn Ljosa
+2  A: 

First, you have to decide if you're going to build your hierarchy bottom-up or top-down.

Bottom-up is called hierarchical agglomerative clustering (HAC; an unfortunate abbreviation). Here's a simple, well-documented algorithm: http://nlp.stanford.edu/IR-book/html/htmledition/hierarchical-agglomerative-clustering-1.html. Another, less-documented example in Python: http://www.noosfactory.com/blog/2006/05/03/hierarchical-agglomerative-clustering-in-python/.

Distributing a bottom-up algorithm is tricky because each distributed process needs the entire dataset to make choices about appropriate clusters. It also needs a list of clusters at its current level so it doesn't add a data point to more than one cluster at the same level.
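To see why, here's a minimal single-machine sketch of bottom-up clustering with single linkage (my own toy code, not from the linked articles; it's O(n^3) and assumes 2-D points with Euclidean distance). Note that every merge decision scans all cluster pairs, which is the global view that's hard to distribute:

```python
def single_linkage_hac(points, target_clusters):
    """Merge the two closest clusters until target_clusters remain.

    points: list of (x, y) tuples; distances are Euclidean.
    Returns a list of clusters, each a list of points.
    """
    clusters = [[p] for p in points]

    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    def cluster_dist(c1, c2):
        # Single linkage: distance between the closest pair of members.
        return min(dist(a, b) for a in c1 for b in c2)

    while len(clusters) > target_clusters:
        # Each merge step compares ALL cluster pairs -- this is the
        # global knowledge a distributed process would also need.
        i, j = min(
            ((i, j)
             for i in range(len(clusters))
             for j in range(i + 1, len(clusters))),
            key=lambda ij: cluster_dist(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters

clusters = single_linkage_hac([(0, 0), (0, 1), (10, 10), (10, 11)], 2)
```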

Top-down hierarchy construction is called Divisive clustering. K-means is one option to decide how to split your hierarchy's nodes. This paper looks at K-means and Principal Direction Divisive Partitioning (PDDP) for node splitting: http://www.siam.org/meetings/sdm02/proceedings/sdm02-18.pdf. In the end, you just need to split each parent node into relatively well-balanced child nodes.

A top-down approach is easier to distribute. After your first node split, each node created can be shipped to a distributed process to be split again and so on... Each distributed process needs only to be aware of the subset of the dataset it is splitting. Only the parent process is aware of the full dataset.

In addition, each split could be performed in parallel. Here's an example for k-means: http://www-users.cs.umn.edu/~mnjoshi/PKMeans.pdf.
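To illustrate that last point: once a parent is divided, its children are independent, so every split at a given tree level can run concurrently. In this sketch a thread pool stands in for distributed workers, and `split_subset` is a hypothetical placeholder (a median partition) for whatever real splitter you use, such as k-means or PDDP:

```python
from concurrent.futures import ThreadPoolExecutor

def split_subset(points):
    # Placeholder split: partition around the median. In practice this
    # would be k-means or PDDP on a real feature matrix.
    mid = sorted(points)[len(points) // 2]
    return [p for p in points if p < mid], [p for p in points if p >= mid]

def parallel_level(subsets):
    """Split every subset at the current tree level concurrently."""
    with ThreadPoolExecutor() as pool:
        halves = list(pool.map(split_subset, subsets))
    # Flatten: the next level holds two children per parent.
    return [child for pair in halves for child in pair]

level0 = [[5, 1, 9, 3, 7, 2, 8, 4]]
level1 = parallel_level(level0)   # two subsets
level2 = parallel_level(level1)   # four subsets
```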

Corbin March
A: 

Check out this very readable, if a bit dated, review by Olson (1995). Most papers since then require a fee to access. :-)

If you use R, I recommend trying pvclust, which achieves parallelism using snow, another R package.

bubaker