views: 125
answers: 3

I've been looking at face detection lately, and a lot of the literature says the detectors output a continuous score. How is this possible? I've created my own network and it only seems to output either -1 or 1. Is this because I'm using the tanh activation function? I want the output to be a value in a range, say 0 to 1, rather than a binary result, so I can see how "strongly" the network thinks the input actually is a face. Thanks.

A: 

Yes, your activation function determines your output values. If you don't put an activation function on your output neurons, each one simply outputs the weighted sum of its inputs, which is unbounded; in that case the error you compute is the raw difference between your unnormalized outputs and your unnormalized expected values.

Of course, if you normalize your expected values to the activation's output range, you can keep your activation function.
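To illustrate (a minimal sketch, not code from the thread; the values in z are made up), the same pre-activation sum lands in a different range depending on the output activation:

    import numpy as np

    # Hypothetical pre-activation values: the weighted sums arriving at an output neuron.
    z = np.array([-5.0, -0.5, 0.0, 0.5, 5.0])

    print(np.tanh(z))                # squashed into (-1, 1); saturates near +/-1 for large |z|
    print(1.0 / (1.0 + np.exp(-z)))  # sigmoid: squashed into (0, 1)
    print(z)                         # no activation: the raw sum, unbounded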

Lirik
A: 

OK, so I think what was going on is that tanh saturates at 1 too soon for my inputs. I've changed to a sigmoid activation function on the output layer and I'm now getting much more varied answers! :) Great, thanks.

Harry
+1  A: 

Your problem might be tanh's input range. Note that sigmoid works a lot like tanh: it is easily saturated by a large input. sigmoid(20) is almost 1 and sigmoid(-20) is almost 0. Try normalizing the inputs to the input layer first, so that smaller numbers reach the hidden layer(s) and the output layer as well.
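A quick demonstration of the saturation, plus one way to normalize (a sketch under my own assumptions, not the asker's network; treating the raw inputs as 0-255 pixel intensities is hypothetical):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    print(sigmoid(20.0))    # ~0.999999998: effectively 1, so the output looks binary
    print(sigmoid(-20.0))   # ~2.1e-09: effectively 0
    print(np.tanh(20.0))    # 1.0 to float precision; tanh saturates the same way

    # Min-max normalization of hypothetical raw inputs (e.g. 0-255 pixel intensities)
    raw = np.array([0.0, 64.0, 128.0, 255.0])
    normalized = raw / 255.0 - 0.5   # maps into [-0.5, 0.5]
    print(normalized)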

Thanks, didn't see your answer when I posted mine. I didn't know that was WHY it didn't work though, so cheers for that too.
Harry
Sorry, out of interest, to what sort of range should the inputs be normalized? I was using -0.5 to 0.5. Apparently that was fine with tanh, but it doesn't seem to like that with sigmoid.
Harry