Is it good practice to use sigmoid or tanh output layers in neural networks directly to estimate probabilities?

i.e., the probability that a given input occurs is the output of the sigmoid function in the NN

EDIT
I want to use a neural network to learn and predict the probability of a given input occurring. You may consider the input as a State1-Action-State2 tuple. Hence the output of the NN is the probability that State2 happens when applying Action on State1.

I hope that clears things up.

EDIT
When training the NN, I apply a random Action to State1 and observe the resultant State2; I then teach the NN that the input State1-Action-State2 should result in an output of 1.0.

+6  A: 

First, just a couple of small points on the conventional MLP lexicon (it might help with internet searches, etc.): 'sigmoid' and 'tanh' are not 'output layers' but functions, usually referred to as "activation functions". The return value of the activation function is indeed the output of each layer, but the activation functions themselves are not the output layer (nor do they calculate probabilities).

Additionally, your question poses a choice between two "alternatives" ("sigmoid and tanh"), but they are not actually alternatives; rather, 'sigmoidal function' is a generic/informal term for a class of functions, which includes the hyperbolic tangent ('tanh') that you refer to.

The term 'sigmoidal' is probably due to the characteristic shape of the function: the return (y) values are constrained between two asymptotic values regardless of the x value. The function output is usually normalized so that these two values are -1 and 1 (or 0 and 1). (This output behavior, by the way, is obviously inspired by the biological neuron, which either fires (+1) or doesn't (-1).) A look at the key properties of sigmoidal functions shows why they are ideally suited as activation functions in feed-forward, backpropagating neural networks: (i) real-valued and differentiable, (ii) having exactly one inflection point, and (iii) having a pair of horizontal asymptotes.

In turn, the sigmoidal function is one category of functions used as the activation function (aka "squashing function") in FF neural networks solved using backprop. During training or prediction, the weighted sum of the inputs (for a given layer, one layer at a time) is passed in as an argument to the activation function, which returns the output for that layer. Another group of functions apparently used as the activation function is the piecewise linear function. The step function is the binary variant of a PLF:

def step_fn(x):
    # binary step function: returns 0 for non-positive input, 1 otherwise
    if x <= 0:
        return 0
    else:
        return 1

(On practical grounds, I doubt the step function is a plausible choice for the activation function, but perhaps it helps in understanding the purpose of the activation function in NN operation.)
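To make the role of the activation function concrete, here is a minimal sketch of a single layer's forward pass, assuming a plain list-of-lists weight matrix and no bias terms (the names are purely illustrative, not from any particular library):

import math

def layer_forward(inputs, weights, activation=math.tanh):
    # for each unit in the layer: take the weighted sum of the inputs,
    # then squash it with the activation function to get that unit's output
    return [activation(sum(w * x for w, x in zip(unit_weights, inputs)))
            for unit_weights in weights]

# e.g., two inputs feeding a layer of three hidden units
hidden = layer_forward([0.5, -1.2], [[0.1, 0.4], [-0.3, 0.8], [0.7, -0.2]])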

I suppose there is an unlimited number of possible activation functions, but in practice you only see a handful; in fact, just two account for the overwhelming majority of cases (both are sigmoidal). Here they are (in Python) so you can experiment for yourself, given that the primary selection criterion is a practical one:

import math

# logistic function
def sigmoid2(x):
    return 1 / (1 + math.exp(-x))

# hyperbolic tangent
def sigmoid1(x):
    return math.tanh(x)

What are the factors to consider in selecting an activation function?

First, the function has to give the desired behavior (arising from, or as evidenced by, the sigmoidal shape). Second, the function must be differentiable. This is a requirement for backpropagation, which is the optimization technique used during training to 'fill in' the values of the hidden layers.

For instance, the derivative of the hyperbolic tangent is (in terms of the output, which is how it is usually written):

def dsigmoid(y):
    # derivative of tanh, written in terms of the output y = tanh(x)
    return 1.0 - y**2
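
For comparison, the derivative of the logistic function can also be written in terms of its own output (a standard identity; the name dsigmoid2 below just mirrors sigmoid2 above):

def dsigmoid2(y):
    # derivative of the logistic function, with y = sigmoid2(x)
    return y * (1.0 - y)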

Beyond those two requirements, what makes one function better than another is how efficiently it trains the network--i.e., which one causes convergence (reaching the local minimum error) in the fewest epochs?

#-------- Edit (see OP's comment below) ---------#

I am not quite sure I understood--sometimes it's difficult to communicate details of a NN without the code, so I should probably just say that it's fine subject to this proviso: what you want the NN to predict must be the same as the dependent variable used during training. So for instance, if you train your NN using two states (e.g., 0, 1) as the single dependent variable (which is obviously missing from your testing/production data), then that's what your NN will return when run in "prediction mode" (post training, or with a competent weight matrix).

doug
+1 However, if he's directly estimating probabilities, I'd like to really highlight that **out of the box** 1/(1+e**(-x)) will do the right thing and produce values between 0 and 1. To use tanh, he would need to slightly modify the activation function, e.g. **tanh(x)/2 + 0.5**
dmcer
@ dmcer: yes, good point.
doug
I have edited my question with the method of training the NN. Please take a look and tell me if that is the right way to do it?
Betamoo
This may be nitpicking, but there is certainly something wrong with your implementation of the radial basis function. Should the 1 be an lx? Additionally, I don't believe the radial basis function is sigmoidal, as you state.
ajduff574
+2  A: 

There is one problem with this approach: if you have vectors from R^n and your network maps those vectors into the interval [0, 1], it will not be guaranteed that the network represents a valid probability density function, since the integral of the network is not guaranteed to equal 1.

E.g., a neural network could map every input from R^n to 1.0, but that is clearly not a valid probability density.

So the answer to your question is: no, you can't.

However, you can just say that your network never sees "unrealistic" samples and thus ignore this fact. For a discussion of this (and also some more cool information on how to model PDFs with neural networks), see contrastive backprop.
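To see the issue numerically, here is a toy sketch (the single-weight "network" and the inputs are made up for illustration): every output lies in [0, 1], yet summed over a set of inputs the outputs are nowhere near 1, so they cannot be read as a density over the inputs.

import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# toy "network": a single weight, no bias
inputs = [-2.0, -1.0, 0.0, 1.0, 2.0]
outputs = [sigmoid(0.8 * x) for x in inputs]

print(outputs)       # each value is in [0, 1]
print(sum(outputs))  # 2.5 here, not 1.0 -- not a valid probability distribution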

bayer
+3  A: 

You should choose the right loss function to minimize. The squared error does not lead to the maximum likelihood hypothesis here. The squared error is derived from a model with Gaussian noise:

P(y|x,h) = k1 * e**(-k2 * (y - h(x))**2)

(Taking the negative log of this model and dropping the constants leaves (y - h(x))**2, so minimizing the squared error corresponds to maximum likelihood only under this Gaussian assumption.)

You estimate the probabilities directly. Your model is:

P(Y=1|x,h) = h(x)
P(Y=0|x,h) = 1 - h(x)

P(Y=1|x,h) is the probability that event Y=1 will happen after seeing x. You have to use one output for each possible observation.

The maximum likelihood hypothesis for your model is:

h_max_likelihood = argmax_h product(
    h(x)**y * (1-h(x))**(1-y) for x, y in examples)

This leads to the "cross entropy" loss function. See chapter 6 in Mitchell's Machine Learning for the loss function and its derivation.
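Taking the negative log of that product turns the maximization into minimizing a sum, which is the cross-entropy loss. A minimal sketch (the function name and the (h_x, y) pair layout are just for illustration; h(x) must lie strictly between 0 and 1):

import math

def cross_entropy_loss(examples):
    # examples: iterable of (h_x, y) pairs, where h_x = h(x) lies in (0, 1)
    # and y is the observed outcome, 0 or 1
    return -sum(y * math.log(h_x) + (1 - y) * math.log(1 - h_x)
                for h_x, y in examples)

# e.g., a hypothesis that assigns high probability to the observed outcomes
print(cross_entropy_loss([(0.9, 1), (0.2, 0), (0.7, 1)]))  # about 0.69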

Ivo Danihelka