views:

421

answers:

3

Are there any comparisons of data mining algorithms, in terms of performance, accuracy, and the amount of data required to build a robust model? It seems that ensemble learning algorithms such as bagging and boosting are considered the most accurate at the moment. I don't have a specific problem to solve; it's just a theoretical question.

+4  A: 

You should search the web for survey papers on Data Mining.

Here is one: Top Ten Algorithms in Data Mining, which gives a ranking rather than a side-by-side comparison. (It might include one, though; I haven't gone through the whole paper.)

Moron
+2  A: 

It is very difficult to compare machine learning algorithms in general in terms of robustness and accuracy. However, one can study their pros and cons. Below I consider a few of the best-known machine learning algorithms (this is in no way a complete account, just my opinion):

Decision trees: most prominently the C4.5 algorithm. They have the advantage of producing an easily interpreted model. They are, however, susceptible to overfitting. Many variants exist.
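
To make the splitting criterion concrete: C4.5-style trees choose the attribute whose split most reduces label entropy (information gain; C4.5 itself normalizes this into gain ratio). A minimal sketch in plain Python, with a made-up two-class example:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, groups):
    """Entropy reduction from partitioning `labels` into `groups`."""
    n = len(labels)
    return entropy(labels) - sum(len(g) / n * entropy(g) for g in groups)

# Splitting four labels into two pure groups removes all uncertainty,
# so the gain equals the full 1 bit of initial entropy:
labels = ['yes', 'yes', 'no', 'no']
gain = information_gain(labels, [['yes', 'yes'], ['no', 'no']])
```

A greedy tree builder applies this score at every node, which is also why unrestricted trees overfit: with enough splits the leaves become pure on the training data.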

Bayesian networks have strong statistical roots. They are especially useful in domains where inference must be done over incomplete data.
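
"Inference over incomplete data" means unobserved variables are simply marginalized out of the posterior. A toy illustration using naive Bayes (the simplest Bayesian network); the spam model and its probability tables are hypothetical, not from the answer:

```python
def predict(priors, likelihoods, observation):
    """Posterior over classes; features observed as None are
    marginalized out (their likelihood factor is omitted)."""
    scores = {}
    for cls, prior in priors.items():
        p = prior
        for feat, val in observation.items():
            if val is not None:            # skip unobserved features
                p *= likelihoods[cls][feat][val]
        scores[cls] = p
    total = sum(scores.values())
    return {cls: p / total for cls, p in scores.items()}

# Hypothetical spam model: P(feature | class) lookup tables
priors = {'spam': 0.4, 'ham': 0.6}
likelihoods = {
    'spam': {'offer': {True: 0.8, False: 0.2}, 'meeting': {True: 0.1, False: 0.9}},
    'ham':  {'offer': {True: 0.1, False: 0.9}, 'meeting': {True: 0.7, False: 0.3}},
}
# The 'meeting' feature is unobserved, so it drops out of the computation:
posterior = predict(priors, likelihoods, {'offer': True, 'meeting': None})
```

The same principle (sum or drop the missing variables) carries over to full Bayesian networks, where exact inference uses algorithms such as variable elimination.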

Artificial neural networks are a widely used and powerful technique. In theory they can approximate any continuous function. However, they require tuning a large number of parameters (network structure, number of nodes, activation functions, ...). They also have the disadvantage of working as a black box: the resulting model is difficult to interpret.

Support vector machines are perhaps considered one of the most powerful techniques. Using the famous kernel trick, one can in theory always achieve perfect separability on the training data. Unlike ANNs, they optimize a convex problem with a unique solution (no local minima). They can, however, be computationally intensive and difficult to apply to large datasets. SVMs are definitely an active research area.
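
The kernel trick amounts to replacing every dot product in the algorithm with a kernel function evaluated in the original input space; the Gaussian (RBF) kernel, for example, corresponds to an inner product in an infinite-dimensional feature space, which is why separability can always be achieved. A minimal sketch of two common kernels:

```python
from math import exp

def linear_kernel(x, z):
    """Plain dot product: no implicit feature mapping."""
    return sum(a * b for a, b in zip(x, z))

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian (RBF) kernel: implicit inner product in an
    infinite-dimensional feature space."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return exp(-gamma * sq_dist)

x, z = [1.0, 0.0], [0.0, 1.0]
# rbf_kernel(x, x) is always 1.0; distinct points give a value in (0, 1),
# decaying with distance, so it behaves like a similarity measure.
```

An SVM trained with `rbf_kernel` never computes the infinite-dimensional mapping explicitly; it only ever evaluates the kernel on pairs of training points, which is what keeps the trick tractable.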

Then there is a class of meta-learning algorithms: the ensemble learning techniques such as bagging, boosting, stacking, etc. They are not complete learners in themselves but are used to improve and combine other algorithms.
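
Bagging is the simplest of these to sketch: train each base learner on a bootstrap resample of the data (sampling with replacement), then aggregate predictions by majority vote. The base learner below is a deliberately crude hypothetical threshold "stump", just to show the meta-algorithm wrapping around it:

```python
import random
from collections import Counter

def bagging_fit(train_fn, data, n_models=25, seed=0):
    """Train n_models copies of a base learner, each on a
    bootstrap resample (same size, drawn with replacement)."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]
        models.append(train_fn(sample))
    return models

def bagging_predict(models, x):
    """Aggregate the ensemble's predictions by majority vote."""
    votes = Counter(m(x) for m in models)
    return votes.most_common(1)[0][0]

def train_stump(sample):
    """Toy base learner on (x, y) pairs: thresholds at the mean of x,
    assuming class 1 lies above it."""
    threshold = sum(x for x, _ in sample) / len(sample)
    return lambda x: 1 if x >= threshold else 0

data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
models = bagging_fit(train_stump, data)
```

Averaging over resamples reduces the variance of an unstable base learner, which is why bagging pairs so well with deep decision trees (random forests are this idea plus random feature selection).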

Finally, I should mention that no algorithm is better than another in general; the choice depends heavily on the domain, the data, and how it is preprocessed, among many other factors.

Amro
Agreed on domain dependence. I believe the "No Free Lunch" theorem is the magic phrase here.
mcdowella
+2  A: 

ROC curves have proved useful for evaluating machine learning techniques, and particularly for comparing different classification algorithms. You may find this introduction to ROC analysis helpful.
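
For reference, an ROC curve plots the true positive rate against the false positive rate as the classifier's decision threshold is swept over its scores; the area under the curve (AUC) summarizes it in one number. A small pure-Python sketch on made-up scores:

```python
def roc_points(labels, scores):
    """(FPR, TPR) points, sweeping every distinct score as a
    threshold from most permissive to strictest."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for threshold in sorted(set(scores), reverse=True):
        tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
        fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
        points.append((fp / neg, tp / pos))
    return [(0.0, 0.0)] + points

def auc(points):
    """Area under the ROC curve via the trapezoidal rule."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

labels = [1, 1, 0, 0]
scores = [0.9, 0.8, 0.3, 0.1]   # this classifier ranks perfectly
points = roc_points(labels, scores)
```

A perfect ranking gives AUC = 1.0, while random scoring hovers around 0.5; because AUC depends only on the ranking of scores, it lets you compare classifiers without committing to a particular threshold.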

gd047