In k-means and KSOM (Kohonen's Self Organizing Map), which one gives the better retrieval performance? And how to calculate that performance?
You need to define your proposed use precisely. Any two alternative methods will very likely represent a trade-off: for some kinds of usage one will be faster than the other, for others one will give greater precision. This is the challenge with almost any computer system: published benchmarks cannot reliably be transferred to specific applications, so you very often need to test with your own data and patterns of usage.
What's worse, you often find that quite small changes in requirements make significant changes in relative performance.
So I'm afraid that you need to perform your own benchmarking and testing for your particular applications.
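To make the benchmarking concrete: since you asked how to calculate retrieval performance, one common and simple approach (when you have ground-truth labels for at least a sample of your data) is cluster purity. The sketch below is illustrative, not tied to either algorithm; `purity` is my own hypothetical helper name, and it assumes integer class labels.

```python
import numpy as np

def purity(true_labels, cluster_labels):
    """Fraction of points whose cluster's majority class matches their own class.

    A simple retrieval-quality proxy: run each algorithm on your own data,
    compute the score (and the timings), and compare.
    """
    true_labels = np.asarray(true_labels)
    cluster_labels = np.asarray(cluster_labels)
    correct = 0
    for c in np.unique(cluster_labels):
        # Class labels of the points assigned to cluster c.
        members = true_labels[cluster_labels == c]
        # Count of the majority class within this cluster.
        correct += np.bincount(members).max()
    return correct / len(true_labels)
```

A perfect clustering scores 1.0; random assignments score close to the frequency of the largest class. Other standard choices (adjusted Rand index, normalized mutual information) penalize over-clustering more fairly, since purity can be inflated by using many tiny clusters.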
I think you should better describe your usage (the shape of the data, the number of elements, and any priors you know about the data before running clustering techniques). K-means is a very light and fast algorithm, but it has major drawbacks:
- Initialization: results depend heavily on the random choice of initial centroids, as the algorithm itself doesn't contain any "local minima avoidance" rule.
- Number of clusters: you should know in advance how many clusters you're going to map onto the data.
- No dependency on the "shape" of clusters: k-means aims to balance the sizes of the partitions of the space, and in textbook implementations there is (almost) no way to tweak the procedure with respect to other properties (second-order statistics, measures of compactness, and so on).
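The usual workaround for the initialization problem is to run k-means from several random seeds and keep the run with the lowest within-cluster sum of squares (inertia). A minimal sketch of plain Lloyd's k-means with random restarts (the function names are my own, and real libraries use smarter seeding such as k-means++):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's k-means: random initial centroids, no restart logic."""
    rng = np.random.default_rng(seed)
    # Initialization: pick k distinct data points as starting centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep an old centroid if its cluster emptied.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    inertia = ((X - centroids[labels]) ** 2).sum()
    return labels, centroids, inertia

def kmeans_restarts(X, k, n_restarts=10):
    """Run k-means from several seeds and keep the lowest-inertia result."""
    return min((kmeans(X, k, seed=s) for s in range(n_restarts)),
               key=lambda result: result[2])
```

Note that restarts only mitigate the first drawback; the number of clusters still has to be chosen up front, and the partitions remain roughly isotropic.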
On the other hand, SOM (or KSOM, as you call it) is mostly used for taxonomies or for subdividing spaces with strong fitness measures, and it can take advantage of more structured priors than k-means. You can select your own kernel function to impose constraints on the network's shape, along with many other advanced practices that deserve more room than just a couple of lines. Drawbacks: the training stage is not as fast as k-means, and it is not useful in certain domains (when the kernel function doesn't approximate the local data dispersion well).
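To make the kernel idea concrete, here is a minimal SOM training sketch: a 2-D grid of units is pulled toward the data, with a Gaussian neighborhood kernel over grid distance that shrinks as training progresses. The function name and the linear decay schedules are my own choices for illustration; a library such as MiniSom offers a tuned implementation.

```python
import numpy as np

def train_som(X, grid=(5, 5), n_epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Train a tiny Kohonen SOM and return the unit weight vectors."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.normal(size=(rows * cols, X.shape[1]))
    # Grid coordinates of each unit, used by the neighborhood kernel.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    total_steps = n_epochs * len(X)
    step = 0
    for _ in range(n_epochs):
        for x in rng.permutation(X):
            frac = step / total_steps
            lr = lr0 * (1 - frac)               # decaying learning rate
            sigma = sigma0 * (1 - frac) + 0.5   # decaying neighborhood radius
            # Best-matching unit (BMU): nearest weight vector to the sample.
            bmu = np.linalg.norm(weights - x, axis=1).argmin()
            # Gaussian kernel over grid distance to the BMU: nearby units
            # on the grid move more; this is where you would swap in your
            # own kernel to impose constraints on the map's shape.
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights
```

The neighborhood coupling is what gives the SOM its topology-preserving map, and also why each sample costs an update of the whole grid, which is the training-speed drawback mentioned above.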
Hope this helps.