Hello,

Could someone please explain to me how to update the bias during backpropagation?

I've read quite a few books, but none of them explain how the bias is updated!

I understand that the bias is an extra input of 1 with a weight attached to it (for each neuron). There must be a formula.

Thank you,

@msw

Most interesting, thank you. I think the two good points are:

1. "The 'universal approximation' property of multilayer perceptrons with most commonly-used hidden-layer activation functions does not hold if you omit the bias terms. But Hornik (1993) shows that a sufficient condition for the universal approximation property without biases is that no derivative of the activation function vanishes at the origin, which implies that with the usual sigmoid activation functions, a fixed nonzero bias term can be used instead of a trainable bias."

2. "The bias terms can be learned just like other weights."

So I will either add a fixed 'constant weight' or train this weight like all the others using gradient descent.

Am I understanding this right?
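To make point 2 concrete: if the bias is treated as a weight on a constant input of 1, its gradient is simply the backpropagated error signal for that neuron. A short derivation (the notation below is illustrative, not from the thread):

```latex
z_j = \sum_i w_{ij}\, x_i + b_j \cdot 1
\qquad\Longrightarrow\qquad
\frac{\partial E}{\partial b_j}
  = \frac{\partial E}{\partial z_j}\,\frac{\partial z_j}{\partial b_j}
  = \delta_j \cdot 1
  = \delta_j,
\qquad
b_j \leftarrow b_j - \eta\, \delta_j
```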

A:

Train this weight like all the others using gradient descent.
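For instance, here is a minimal sketch of one such gradient-descent step for a single sigmoid neuron with squared-error loss, where the bias is updated exactly like the weights (all names here, `W`, `b`, `lr`, are illustrative, not from the thread):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=3)   # one input vector
t = 1.0                  # target output
W = rng.normal(size=3)   # weights
b = 0.0                  # bias, trained like any other weight
lr = 0.1                 # learning rate

# Forward pass
z = W @ x + b
y = sigmoid(z)

# Backward pass: delta = dE/dz for E = 0.5 * (y - t)^2
delta = (y - t) * y * (1.0 - y)

# Gradient-descent update; the bias "input" is the constant 1,
# so its gradient is just delta itself
W -= lr * delta * x
b -= lr * delta * 1.0
```

The key line is `b -= lr * delta * 1.0`: because the bias input is the constant 1, its update uses the error signal directly, with no input term to multiply in.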

pberkes