I need to do some cluster analysis on a set of 2-dimensional data (I may add extra dimensions along the way).

The analysis itself will form part of the data being fed into a visualisation, rather than the input to another process (e.g. Radial Basis Function Networks).

To this end, I'd like to find a set of clusters which primarily "looks right", rather than elucidating some hidden patterns.

My intuition is that k-means would be a good starting place for this, but that finding the right number of clusters to run the algorithm with would be problematic.

The problem I'm coming to is this:

How to determine the 'best' value for k such that the clusters formed are stable and visually verifiable?

Questions:

  • Assuming that this isn't NP-complete, what is the time complexity for finding a good k? (Probably reported as the number of times the k-means algorithm must be run.)
  • Is k-means a good starting point for this type of problem? If so, what other approaches would you recommend? A specific example, backed by an anecdote/experience, would be maxi-bon.
  • What shortcuts/approximations would you recommend to increase performance?
+1  A: 

You might look at papers on cluster validation. Here's one that is cited in papers that involve microarray analysis, which involves clustering genes with related expression levels.

One such technique is the silhouette measure, which evaluates how close a labeled point is to its centroid. The general idea is that if a point is assigned to one centroid but is still close to others, it may have been assigned to the wrong centroid. By counting these events across training sets and across various k-means clusterings, you look for the k at which the labeled points overall fall into the "best", i.e. least ambiguous, arrangement.
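
As a minimal sketch of how this might look in Python (scikit-learn assumed; `X` and the range of k are placeholders):

    # Sketch: pick k by maximising the mean silhouette score.
    # Assumes scikit-learn; X is a (n_samples, 2) array standing in for your data.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    X = np.random.rand(200, 2)  # placeholder for your 2-D points

    best_k, best_score = None, -1.0
    for k in range(2, 11):  # silhouette is undefined for k=1
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        score = silhouette_score(X, labels)
        if score > best_score:
            best_k, best_score = k, score

    print(best_k, best_score)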

It should be said that clustering is more of a data visualization and exploration technique. It can be difficult to show with certainty that one clustering explains the data better than all others. It's best to combine your clusterings with any other relevant information. Is there something functional or otherwise informative about your data, such that you know some clusterings are impossible? This can reduce your solution space considerably.

Alex Reynolds
The plane is geographical, i.e. a continuous surface.
jamesh
+1  A: 

From your Wikipedia link:

Regarding computational complexity, the k-means clustering problem is:

  • NP-hard in general Euclidean space d, even for 2 clusters
  • NP-hard for a general number of clusters k, even in the plane
  • If k and d are fixed, the problem can be exactly solved in time O(n^(dk+1) log n), where n is the number of entities to be clustered

Thus, a variety of heuristic algorithms are generally used.

That said, finding a good value of k is usually a heuristic process (i.e. you try a few and select the best).

I think k-means is a good starting point; it is simple and easy to implement (or copy). Only look further if you have serious performance problems.

If the set of points you want to cluster is exceptionally large, a first-order optimisation would be to randomly select a small subset and use that subset to find your k-means.
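
A rough sketch of this idea in Python (scikit-learn assumed; the data, subset size, and k are placeholders):

    # Sketch: fit k-means on a random subset, then label the full set.
    # Assumes scikit-learn; the sizes below are arbitrary illustrations.
    import numpy as np
    from sklearn.cluster import KMeans

    X = np.random.rand(1_000_000, 2)  # placeholder for a very large data set

    rng = np.random.default_rng(0)
    subset = X[rng.choice(len(X), size=10_000, replace=False)]

    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(subset)
    labels = km.predict(X)  # assign every point to the nearest learned centroid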

jilles de wit
I got the WP link, thanks for the clippage. I'm not sure I understand your final paragraph.
jamesh
What I mean by the last paragraph is: if your dataset is very large, say 1,000,000 points, you could randomly select a subset of 10,000 and use that to find your k-means. This will cut down your computation time, with a small negative effect on the accuracy of your k-means.
jilles de wit
+2  A: 

In order to use k-means, you should know how many clusters there are. You can't try a naive meta-optimisation, since the more clusters you add (up to one cluster per data point), the closer you get to over-fitting. You may look for cluster validation methods and optimise the k hyperparameter with them, but in my experience it rarely works well. It's also very costly.

If I were you, I would do a PCA, possibly on a polynomial feature space (mind your available time), depending on what you know of your input, and cluster along the most representative components.
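
A sketch of that pipeline in Python (scikit-learn assumed; the polynomial degree, number of components, and k are illustrative choices, not recommendations):

    # Sketch: optional polynomial features -> PCA -> k-means on the top components.
    # Assumes scikit-learn; all parameters below are placeholders.
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.pipeline import make_pipeline

    X = np.random.rand(500, 2)  # placeholder input

    pipeline = make_pipeline(
        PolynomialFeatures(degree=2),  # the "polynomial space" step; can be costly
        StandardScaler(),
        PCA(n_components=2),           # keep the most representative components
    )
    components = pipeline.fit_transform(X)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(components)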

More info on your data set would be very helpful for a more precise answer.

Aszarsha
+2  A: 

Here's my approximate solution:

  1. Start with k=2.
  2. For a number of tries:
    1. Run the k-means algorithm to find k clusters.
    2. Find the mean square distance from the origin to the cluster centroids.
  3. Take the standard deviation of these distances across the tries. This is a proxy for the stability of the clusters.
  4. If the clusters for k are less stable than the clusters for k - 1, return k - 1.
  5. Otherwise, increment k by 1 and go back to step 2.

The thesis behind this algorithm is that the number of sets of k clusters is small for "good" values of k.

If we can find a local optimum for this stability, or an optimal delta for the stability, then we can find a good set of clusters which cannot be improved by adding more clusters.
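
A minimal sketch of this heuristic in Python (scikit-learn assumed; it reads "less stable" as a larger standard deviation, and `X`, the number of tries, and the cap on k are placeholders):

    # Sketch of the stability heuristic described above. Assumes scikit-learn.
    import numpy as np
    from sklearn.cluster import KMeans

    def centroid_spread(X, k, tries=10):
        """Std dev of the mean square origin-to-centroid distance across runs."""
        distances = []
        for seed in range(tries):
            centroids = KMeans(n_clusters=k, n_init=1, random_state=seed).fit(X).cluster_centers_
            distances.append(np.mean(np.sum(centroids ** 2, axis=1)))
        return np.std(distances)

    def pick_k(X, k_max=15):
        previous = centroid_spread(X, 2)
        for k in range(3, k_max + 1):
            current = centroid_spread(X, k)
            if current > previous:  # stability dropped: previous k was better
                return k - 1
            previous = current
        return k_max

    X = np.random.rand(300, 2)  # placeholder data
    print(pick_k(X))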

jamesh
Instead of arbitrary increments, I would feed the problem to a genetic algorithm, applying similar logic in the fitness function; that way you won't get stuck in local optima.
JohnIdol
For such a small search space, a GA seems a little intense. The two global optima are at k=1 and k=n (which is why I started with k=2), and the approach to k=n may be quite asymptotic.
jamesh
+5  A: 

For problems with an unknown number of clusters, agglomerative hierarchical clustering is often a better route than k-means.

Agglomerative clustering produces a tree structure, where the closer you are to the trunk, the fewer the clusters, so it's easy to scan through all numbers of clusters. The algorithm starts by assigning each point to its own cluster, and then repeatedly merges the two closest clusters. Keeping track of the merge sequence gives you an instant snapshot for any possible number of clusters. This is why it's often preferable to k-means when you don't know how many groups you'll want.

There are other hierarchical clustering methods (see the paper suggested in Imran's comments). The primary advantage of an agglomerative approach is that there are many implementations out there, ready-made for your use.
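
For example, a minimal sketch using SciPy's ready-made implementation (SciPy assumed; the "ward" linkage and the range of cuts are illustrative):

    # Sketch: agglomerative clustering with SciPy, then cut the tree at any k.
    # Assumes scipy; the linkage method and k values are illustrative choices.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    X = np.random.rand(200, 2)  # placeholder for your points

    tree = linkage(X, method="ward")  # builds the full merge tree once

    # Cutting the tree gives a snapshot for any number of clusters:
    for k in range(2, 6):
        labels = fcluster(tree, t=k, criterion="maxclust")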

tom10
This is a good approach. Also take a look at this paper: http://lib.stat.cmu.edu/s/mclust/tr329.pdf
Imran
+2  A: 

In a previous answer, I explained how Self-Organizing Maps (SOM) can be used in visual clustering.

Otherwise, there exists a variation of the k-means algorithm called X-Means, which finds the number of clusters by optimizing the Bayesian Information Criterion (BIC) and addresses scalability by using KD-trees.
Weka includes an implementation of X-Means along with many other clustering algorithms, all in an easy-to-use GUI tool.

Finally, you might refer to this page, which discusses the Elbow Method among other techniques for determining the number of clusters in a dataset.
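
A minimal sketch of the Elbow Method in Python (scikit-learn and matplotlib assumed; `X` and the range of k are placeholders):

    # Sketch: plot within-cluster sum of squares (inertia) against k and look
    # for the "elbow" where the improvement flattens out. Assumes scikit-learn.
    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.cluster import KMeans

    X = np.random.rand(300, 2)  # placeholder data

    ks = range(1, 11)
    inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in ks]

    plt.plot(ks, inertias, marker="o")
    plt.xlabel("k")
    plt.ylabel("within-cluster sum of squares")
    plt.show()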

Amro
Good point, I'd forgotten about Kohonen maps.
jamesh
+1  A: 

Choosing the best K can be seen as a model-selection problem. One possible approach is Minimum Description Length, which in this context means: at one extreme, you could store a table with all the points (in which case K=N); at the other extreme, you have K=1, and all the points are stored as their distances from a single centroid. This section from Introduction to Information Retrieval by Manning and Schutze suggests minimising the Akaike Information Criterion as a heuristic for an optimal K.
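
As a sketch of that heuristic in Python (scikit-learn assumed; it uses the simple penalised form AIC(K) ~ RSS(K) + 2*M*K, with M the number of dimensions, which is one common reading of that section; `X` and the range of K are placeholders):

    # Sketch: choose K by minimising an AIC-style criterion for k-means.
    # Assumes scikit-learn; inertia_ is the residual sum of squares RSS(K).
    import numpy as np
    from sklearn.cluster import KMeans

    X = np.random.rand(400, 2)  # placeholder data
    M = X.shape[1]              # number of dimensions

    def aic(k):
        rss = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
        return rss + 2 * M * k  # penalise extra clusters

    best_k = min(range(1, 16), key=aic)
    print(best_k)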

Yuval F