According to an answer here, artificial neural networks have been made obsolete by Support Vector Machines, Gaussian Processes, and generative and descriptive models. What is your opinion?
Neural networks are one method of "machine learning," and just because there are newer techniques doesn't mean the older ones are obsolete. There are quite a few applications for them, including risk assessment for financial businesses.
They're quite good at detecting patterns, so people still use them in applications that need that. I've found them useful for risk assessment myself: determining, from a large amount of training data, whether a given customer would be a high risk for the company. There may well be better methods for that kind of task, but I found a neural network to be a perfectly acceptable solution, with good results.
Yes, they are. Neural networks' tendency to get stuck in local minima (i.e. finding a solution that's better than the ones immediately to its left and right, with no way of knowing that a far better solution lies a good distance away) is inherent to the methodology, and the effort required to even partially compensate for it is considerably greater than the effort of just using a methodology that works better.
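To make the local-minimum problem concrete, here's a minimal sketch (not a neural network, just a made-up 1D loss surface with two basins; the function, learning rate, and step count are all invented for illustration):

```python
def loss(x):
    # Toy loss with two minima: a global one near x = -1.03
    # and a worse local one near x = 0.96.
    return x**4 - 2 * x**2 + 0.3 * x

def grad(x):
    # Derivative of loss(x).
    return 4 * x**3 - 4 * x + 0.3

def descend(x, lr=0.01, steps=2000):
    # Plain gradient descent from starting point x.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Starting on the right slope, gradient descent settles into the
# worse basin and has no way of knowing a better one exists.
x_right = descend(1.5)   # ends near the local minimum (~0.96)
x_left = descend(-1.5)   # ends near the global minimum (~-1.03)
```

Both runs converge, but `loss(x_right) > loss(x_left)`: the descent that started on the right is permanently stuck with the worse solution, which is exactly the problem described above.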
In his paper 'Gaussian Processes - A Replacement for Supervised Neural Networks?' (http://www.inference.phy.cam.ac.uk/mackay/BayesGP.html), MacKay states:
"The most interesting problems, the task of feature discovery for example, are not ones which Gaussian processes will solve. But maybe multilayer perceptrons can't solve them either."
However, a paper in the journal Kidney suggests that
"In conclusion, although we understand that for special problems the ANN may still yield reasonable results, we argue that in general (from a theoretical perspective) and in particular (for the considered case study) support vector machine indeed outperform ANN."
Finally: www.cs.umu.se/education/examina/Rapporter/MichalAntkowiak.pdf
Its Fig. 4.3 compares the best results achieved by each method: much better classification results were obtained with the ANN than with the SVM. The ANN also seemed more resistant to insufficient data, because even for the small set of Melanoma Maligna pictures the results were satisfactory. That cannot be said of the SVM, which had trouble classifying that disease and confused it with Melanocytic Nevus.
So, like pretty much everything in CS, it's a matter of trade-offs: the question is not which method is the "best" but which is the "best for your particular problem".
I think the phrase 'no longer fashionable' is more appropriate than 'obsolete'. The fact is that the research community is just as susceptible to hype and fashion as any other community.
Neural networks were hyped a lot several years ago as one of the early AI technologies which was going to solve all the problems in the world. Neural networks have since experienced a backlash, partly because they are thought of as old technology that failed to live up to the hype, and partly because they are thought of as difficult to work with.
However, there is some very interesting newer research being done in 'deep learning' which, as far as I understand, is based on an efficient way of training neural networks with a lot of hidden layers. Some of the results being produced by this technique are very impressive.
Neural networks have been out of fashion for a while, but maybe it's time for a comeback?
A strange conclusion, which reminds me of a historical precedent: the case of the perceptron (a simple kind of artificial neural network):
... in 1969, Minsky co-authored with Seymour Papert, Perceptrons: An Introduction to Computational Geometry. In this work they attacked the limitations of the perceptron.
They showed that the perceptron could only solve linearly separable functions. Of particular interest was the fact that the perceptron still could not solve the XOR and NXOR functions. Likewise, Minsky and Papert stated that the style of research being done on the perceptron was doomed to failure because of these limitations. This was, of course, Minsky’s equally ill-timed remark. As a result, very little research was done in the area until about the 1980’s§. ...
§ Minsky and Papert were two pioneers of AI, so their opinion carried a lot of weight at the time. This was the classic symbolic vs. subsymbolic debate in Artificial Intelligence.
In fact, this limitation was easy to overcome simply by adding more than one layer of nodes (artificial neurons).
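The XOR case is easy to see concretely. Here is a sketch of a two-layer network of threshold units that computes XOR with hand-picked weights (no training involved; the weights and helper names are chosen just for this illustration) — something a single perceptron provably cannot do:

```python
def step(x):
    # Threshold (Heaviside) activation, as in the original perceptron.
    return 1 if x > 0 else 0

def xor_net(a, b):
    # Hidden layer: h1 fires for OR(a, b), h2 fires for AND(a, b).
    h1 = step(a + b - 0.5)
    h2 = step(a + b - 1.5)
    # Output: OR and not AND, which is exactly XOR.
    return step(h1 - h2 - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))
# prints the truth table: (0,0)->0, (0,1)->1, (1,0)->1, (1,1)->0
```

The single hidden layer re-maps the inputs into a space where the classes become linearly separable, which is precisely the escape route from Minsky and Papert's objection.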
The moral of the story is that a technology can overcome its limitations with even a modest improvement. Case in point (with a not-so-modest improvement): Jürgen Schmidhuber and colleagues' recent work on Recurrent Neural Networks (RNNs):
... Early RNNs of the 1990s could not learn to look far back into the past. Their problems were first rigorously analyzed on Schmidhuber's RNN long time lag project by his former PhD student Hochreiter (1991). A feedback network called "Long Short-Term Memory" (LSTM, Neural Comp., 1997) overcomes the fundamental problems of traditional RNNs, and efficiently learns to solve many previously unlearnable tasks involving: ...
Well, shallow neural networks are certainly less popular, since methods like SVMs can be as effective (or more so) with less tinkering.
However, neural networks are still very much active and relevant, especially deep neural networks such as Deep Belief Networks (DBNs). DBNs come in two flavors: convolutional networks and stacks of restricted Boltzmann machines (RBMs). Convolutional networks are typically used for vision (and I know virtually nothing more about them). DBNs built from several layers of RBMs are great at learning high-level features of data in an unsupervised fashion, at autoencoding, at semantic hashing, and yes, at classifying.
The trick is that DBNs are pretrained before back-propagation is applied, since back-propagation on its own is typically slow and kinda useless past 2 or 3 layers.
Two great sources:
Relatively small neural networks of the type built to date might be considered (by some) to be unpromising and therefore obsolete. On the other hand, a neural network with around 100 billion nodes and about 100 trillion interconnections (i.e., something on the scale of the human brain) might be surprisingly effective.