I am trying to get a feel for the difference between the various classes of machine-learning algorithms.

I understand that the implementations of evolutionary algorithms are quite different from the implementations of neural networks.

However, they both seem to be geared at determining a correlation between inputs and outputs from a potentially noisy set of training/historical data.

From a qualitative perspective, are there problem domains that are better targets for neural networks as opposed to evolutionary algorithms?

I've skimmed some articles that suggest using them in a complementary fashion. Is there a decent example of a use case for that?

Thanks

+1  A: 

Evolutionary algorithms are really slow because they just follow a random path. Neural networks are generally faster, because they basically employ gradient descent within a function space over certain parameters. Generally, neural networks are a last resort when other methods don't work.

gersh
Can you elaborate on "gradient descent within a function space" in layman's terms? Does that just mean neural networks converge on potential solutions faster by using a more sophisticated feedback mechanism as opposed to brute force?
Joe Holloway
That really depends on the problem domain and potential epistasis of the parameters in the solution space.
Ryan
More sophisticated feedback is correct. A function space is just a bunch of functions; for example, f(x) = a * x for different values of a is a function space. Gradient descent here involves evaluating a particular function, taking a 'derivative', and tweaking the function's parameters in the correct direction.
bsdfish
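To make that concrete, here is a small sketch (my own toy example, not from the thread; the data and learning rate are made up) of gradient descent over the function space f(x) = a * x:

```python
# Hypothetical illustration: the "function space" is f(x) = a * x for
# different values of a; gradient descent tweaks a to reduce squared error.

# Noisy training data roughly following y = 3 * x
data = [(1.0, 3.1), (2.0, 5.9), (3.0, 9.2), (4.0, 11.8)]

a = 0.0    # initial guess for the parameter
lr = 0.01  # learning rate (step size)

for step in range(1000):
    # Derivative of the mean squared error with respect to a
    grad = sum(2 * (a * x - y) * x for x, y in data) / len(data)
    a -= lr * grad  # move a in the direction that reduces the error

print(round(a, 2))  # close to the true slope of about 3
```

Each step is deterministic feedback from the error, rather than random sampling, which is why this kind of search typically converges faster than blind exploration when the error surface is smooth.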
Evolutionary algorithms do use randomness, but this doesn't make them intrinsically any slower. They may be slower for some functions, but the randomness also makes them proportionally less susceptible to getting stuck in local maxima, which is a desirable property.
Kylotan
"they just follow a random path". I don't think that is true. You could say "they randomly sample around their path". Regardless, that doesn't make them intrinsically slow, being no different from any stochastic sampling method in that respect.
Stewart
+3  A: 

Problems that require "intuition" are better suited to ANNs, for example handwriting recognition. You train a neural network with a huge amount of input and rate it until you're done (this takes a long time), but afterwards you have a black-box algorithm/system that can "guess" the handwriting, so you keep your little brain and use it as a module for many years. Training a quality ANN for a complex problem can take months in the worst case, and some luck.

Most other evolutionary algorithms, by contrast, "calculate" an ad-hoc solution on the spot, in a sort of hill-climbing pattern.

Also, as pointed out in another answer, at runtime an ANN can "guess" faster than most other evolutionary algorithms can "calculate". However, one must be careful, since the ANN is just "guessing", and it might be wrong.

Robert Gould
+6  A: 

Here is the deal: in machine learning problems, you typically have two components:

a) The model (function class, etc)

b) Methods of fitting the model (optimization algorithms)

Neural networks are a model: given a layout and a setting of weights, the neural net produces some output. There exist some canonical methods of fitting neural nets, such as backpropagation, contrastive divergence, etc. However, the big point of neural networks is that if someone gave you the 'right' weights, you'd do well on the problem.

Evolutionary algorithms address the second part -- fitting the model. Again, there are some canonical models that go with evolutionary algorithms: for example, evolutionary programming typically tries to optimize over all programs of a particular type. However, EAs are essentially a way of finding the right parameter values for a particular model. Usually, you write your model parameters in such a way that the crossover operation is a reasonable thing to do and turn the EA crank to get a reasonable setting of parameters out.

Now, you could, for example, use evolutionary algorithms to train a neural network, and I'm sure it's been done. However, the critical bit EAs require to work is that the crossover operation must be a reasonable thing to do -- by taking part of the parameters from one reasonable setting and the rest from another reasonable setting, you'll often end up with an even better parameter setting. Most of the time EAs are used, this is not the case, and they end up being something like simulated annealing, only more confusing and inefficient.
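As a rough sketch of that neuroevolution idea (my own toy setup; the network layout, XOR task, operators, and all parameters are assumptions, not from this answer), an EA with a per-weight crossover operator can fit the weight vector of a tiny network:

```python
# Sketch: an EA searches over the 9-weight vector of a 2-2-1 sigmoid
# network for the XOR problem, using per-weight crossover plus mutation.
import math
import random

random.seed(0)
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # 2 inputs -> 2 hidden sigmoid units -> 1 sigmoid output
    h1 = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + w[2])))
    h2 = 1 / (1 + math.exp(-(w[3] * x[0] + w[4] * x[1] + w[5])))
    return 1 / (1 + math.exp(-(w[6] * h1 + w[7] * h2 + w[8])))

def fitness(w):
    # Negative squared error over the training set: higher is better
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def crossover(a, b):
    # Take each weight from one parent or the other
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(w):
    return [wi + random.gauss(0, 0.5) for wi in w]

pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]  # truncation selection: keep the 10 fittest
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(40)]

best = max(pop, key=fitness)
```

Note that the per-weight crossover here is only "reasonable" in the sense the answer describes if mixing weights from two decent networks tends to produce another decent network, which is exactly the assumption that often fails in practice.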

bsdfish
+1  A: 

In terms of problem domains, I'll compare artificial neural networks trained by backpropagation with evolutionary algorithms.

An evolutionary algorithm deploys a randomized beam search: your evolutionary operators develop candidates to be tested and compared by their fitness. Those operators are usually non-deterministic, and you can design them so they find both candidates in close proximity and candidates that are further away in the parameter space, to overcome the problem of getting stuck in local optima.
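As a toy illustration of that operator-design point (my own example; the test function, step sizes, and jump probability are all assumptions), mixing small local moves with occasional long-range jumps lets a stochastic search escape local optima that a purely local search cannot:

```python
# Sketch: a 1-D Rastrigin-style function with many local minima.
# A climber using only tiny steps gets stuck; an operator mix of
# small and large mutations can jump out of a local basin.
import math
import random

random.seed(1)

def f(x):  # global minimum at x = 0, where f(0) = 0
    return x * x + 10 - 10 * math.cos(2 * math.pi * x)

def local_only(x, steps=2000):
    for _ in range(steps):
        cand = x + random.gauss(0, 0.05)  # only small local moves
        if f(cand) < f(x):
            x = cand
    return x

def mixed_operators(x, steps=2000):
    for _ in range(steps):
        # 10% of the time take a large jump, otherwise a local tweak
        sigma = 3.0 if random.random() < 0.1 else 0.05
        cand = x + random.gauss(0, sigma)
        if f(cand) < f(x):
            x = cand
    return x

stuck = local_only(5.0)        # trapped in the basin around x = 5
found = mixed_operators(5.0)   # large jumps reach better basins
```

This is a (1+1)-style search rather than a full population-based EA, but it isolates the property described above: the long-range operator is what provides resistance to local optima.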

However, the success of an EA approach greatly depends on the model you develop, which is a tradeoff between high expressive power (you might overfit) and generality (the model might not be able to express the target function).

Because neural networks are usually multilayered, the parameter space is not convex and contains local optima that gradient descent algorithms might get stuck in. Gradient descent is a deterministic algorithm that searches in close proximity. That's why neural networks are usually randomly initialised, and why you should train many more models than just one.

Moreover, since each hidden node in a neural network defines a hyperplane, you can design a neural network so it fits your problem well. There are also some techniques to prevent neural networks from overfitting.

All in all, neural networks can be trained fast and get reasonable results with little effort (just try some parameters). In theory, a neural network that is large enough is able to approximate any target function, which on the other hand makes it prone to overfitting. Evolutionary algorithms require you to make a lot of design choices to get good results, the hardest probably being which model to optimise. But EAs are able to search through very complex problem spaces (in a manner you define) and get good results quickly. EAs can even remain successful when the problem (the target function) changes over time.

Tom Mitchell's Machine Learning Book: http://www.cs.cmu.edu/~tom/mlbook.html

A: 

Hi, I see that you have problems with algorithms. Maybe this free book will help you. It is free to download here: http://www.intechopen.com/books/show/title/advances_in_evolutionary_algorithms
The aim of the book is to present recent improvements, innovative ideas, and concepts in part of the huge EA field.

Hanibal Lecter