I'm starting to learn about neural networks, currently following mostly D. Kriesel's tutorial. Right at the beginning it introduces at least three (different?) learning rules (Hebbian learning, the delta rule, backpropagation) for supervised learning.

I might be missing something, but if the goal is merely to minimize the error, why not just apply gradient descent over Error(entire_set_of_weights)?

Edit: I must admit the answers still confuse me. It would be helpful if someone could point out the actual difference between those methods, and the difference between them and straight gradient descent.

To stress the point: these learning rules seem to take the layered structure of the network into account, whereas finding the minimum of Error(W) over the entire set of weights completely ignores it. How does that fit together?

+2  A: 

First off, note that "backpropagation" simply means applying the delta rule on each layer from the output back to the input, so it's not a separate rule.

As for why not simple gradient descent: the delta rule is basically gradient descent. However, it tends to overfit the training data and doesn't generalize as well as techniques that don't try to drive the error all the way to zero. This makes sense because "error" here simply means the difference between our samples and the output; the samples are not guaranteed to accurately represent all possible inputs.
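To make "the delta rule is basically gradient descent" concrete, here is a minimal sketch (not from the tutorial; the names are illustrative) of the delta rule on a single linear unit, where the update is literally one gradient-descent step on the squared error:

    import numpy as np

    # E = 0.5 * (target - output)^2 with output = w . x, so
    # dE/dw = -(target - output) * x, and w <- w - lr * dE/dw
    # is exactly the delta rule update below.
    def delta_rule_step(w, x, target, lr=0.1):
        output = np.dot(w, x)
        error = target - output
        return w + lr * error * x

    # One pass over two toy samples.
    w = np.zeros(3)
    for x, t in [(np.array([1.0, 0.0, 1.0]), 1.0),
                 (np.array([0.0, 1.0, 1.0]), 0.0)]:
        w = delta_rule_step(w, x, t)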

Max Shawabkeh
+8  A: 

One question is how to apportion the "blame" for an error. The classic delta rule (or LMS rule) is essentially gradient descent. When you apply the delta rule to a multilayer network, you get backprop. Other rules have been created for various reasons, including the desire for faster convergence, unsupervised learning, temporal problems, models believed to be closer to biology, etc.

On your specific question of "why not just gradient descent?": gradient descent may work for some problems, but many error surfaces have local minima that naive gradient descent will get stuck in. The usual first response is to add a "momentum" term, so that you might "roll out" of a local minimum; that's pretty much the classic backprop algorithm.
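As a sketch of that momentum term (an illustration with made-up names, not a specific library API; the caller supplies a grad function returning dE/dw, and the coefficient 0.9 is a typical but arbitrary choice):

    import numpy as np

    def momentum_descent(grad, w, lr=0.01, mu=0.9, steps=1000):
        # v is a decaying sum of past gradients; this "velocity" can
        # carry the weights through a shallow local minimum instead of
        # letting the update stop there.
        v = np.zeros_like(w)
        for _ in range(steps):
            v = mu * v - lr * grad(w)
            w = w + v
        return w

    # Usage: minimize E(w) = (w - 3)^2 by passing its gradient 2*(w - 3).
    w_min = momentum_descent(lambda w: 2.0 * (w - 3.0), np.array([0.0]))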

Larry OBrien
A: 

Backpropagation and naive gradient descent also differ in computational efficiency. Backprop takes the network's structure into account and, for each weight, calculates only the parts of the gradient that are actually needed.

The derivative of the error with respect to the weights is split via the chain rule into ∂E/∂W = ∂E/∂A · ∂A/∂W, where A denotes the activations of particular units. In most cases those derivatives are zero, because W is sparse due to the network's topology. Backprop gives you the learning rules that ignore exactly those parts of the gradient.
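For illustration, here is that chain-rule split written out for one hidden layer with sigmoid activations (a sketch with made-up names, not code from the answer); the per-layer delta terms are the reused ∂E/∂A factors:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def backprop_step(W1, W2, x, target, lr=0.1):
        # Forward pass: a1 and a2 are the activations A of the two layers.
        a1 = sigmoid(W1 @ x)
        a2 = sigmoid(W2 @ a1)

        # Output layer: dE/dA2 = (a2 - target) for E = 0.5*||target - a2||^2,
        # times the sigmoid derivative a2 * (1 - a2).
        delta2 = (a2 - target) * a2 * (1.0 - a2)
        # dA2/dW2 involves only a1, so dE/dW2 = outer(delta2, a1).
        grad_W2 = np.outer(delta2, a1)

        # Hidden layer: reuse delta2 rather than recomputing dE/dA1.
        delta1 = (W2.T @ delta2) * a1 * (1.0 - a1)
        grad_W1 = np.outer(delta1, x)

        return W1 - lr * grad_W1, W2 - lr * grad_W2

    # Usage: 3 inputs, 4 hidden units, 2 outputs.
    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
    W1, W2 = backprop_step(W1, W2, rng.normal(size=3), np.array([1.0, 0.0]))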

So, from a mathematical perspective, backprop is not that exciting.

bayer
A: 

Hi,

there may be problems that, for example, make backprop run into local minima. Furthermore, just as an example, you can't adjust the topology of the network with backprop. There are also cool learning methods using nature-inspired metaheuristics (for instance, evolutionary strategies) that make it possible to adjust the weights AND the topology (even recurrent ones) simultaneously. I will probably add one or more chapters to cover them, too.
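As an illustration of the evolutionary idea (a minimal sketch, not from the manuscript; the error function is a placeholder and topology mutation is omitted for brevity), a (1+1) evolution strategy on the weights alone looks like this:

    import numpy as np

    def evolve_weights(error, w, sigma=0.1, generations=1000):
        # (1+1) strategy: mutate, keep the mutant if it is no worse.
        # No gradient is needed, which is why the same scheme also
        # works when a mutation changes the topology instead.
        best = error(w)
        for _ in range(generations):
            candidate = w + sigma * np.random.randn(*w.shape)
            e = error(candidate)
            if e <= best:
                w, best = candidate, e
        return w

    # Usage: error() would evaluate the network with the given weights;
    # here a stand-in quadratic for illustration.
    w = evolve_weights(lambda w: float(np.sum((w - 1.0) ** 2)), np.zeros(5))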

There is also a discussion function right on the download page of the manuscript - if you find other hassles that you don't like about the manuscript, feel free to add them to the page so I can change things in the next edition.

Greetz, David (Kriesel ;-) )

David