Hi,

I'm trying to extend my code for a single-layer neural network which takes a bitmap as input and has 26 outputs, one for the likelihood of each letter of the alphabet.

The first question I have is about the single hidden layer being added. Am I correct in thinking that the hidden layer will have its own set of output values and weights only? It doesn't need to have its own biases?

Can I also confirm that I'm thinking about the feedforward aspect correctly? Here's some pseudocode:

// input => hidden
for j in hiddenOutput.length:
    sum = 0
    for i in inputs.length:
        sum += inputs[i] * hiddenWeights[j][i]
    hiddenOutput[j] = activationFunction(sum)
// hidden => output
for j in output.length:
    sum = 0
    for k in hiddenOutput.length:
        sum += hiddenOutput[k] * weights[j][k]
    output[j] = activationFunction(sum)
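In case it's clearer, here's how I picture those loops fleshed out in Python. This is just a sketch, not my actual code: `sigmoid`, `hiddenBias` and `outputBias` are my guesses at the missing pieces, and I've given each layer its own bias since that's what I've seen in examples (which is partly what my bias question above is about).

```python
import math

def sigmoid(x):
    # assumed activation function; stands in for activationFunction above
    return 1.0 / (1.0 + math.exp(-x))

def feedforward(inputs, hiddenWeights, hiddenBias, weights, outputBias):
    # input => hidden: each hidden unit j takes a weighted sum over ALL inputs
    hiddenOutput = []
    for j in range(len(hiddenWeights)):
        s = hiddenBias[j]
        for i in range(len(inputs)):
            s += inputs[i] * hiddenWeights[j][i]
        hiddenOutput.append(sigmoid(s))
    # hidden => output: each output unit sums over all hidden outputs
    output = []
    for k in range(len(weights)):
        s = outputBias[k]
        for j in range(len(hiddenOutput)):
            s += hiddenOutput[j] * weights[k][j]
        output.append(sigmoid(s))
    return hiddenOutput, output
```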

Assuming that is correct, would the training be something like this?

def train(input[], desired[]):
    feed input forward to get hiddenOutput[] and output[]
    iterate through output and determine errors[]
    iterate through hiddenOutput and determine hiddenErrors[] from errors[] and weights
    update weights & bias accordingly
    update hiddenWeights & (same bias?) accordingly
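And here's the training step written out the same way. Again a sketch rather than working code: the delta formulas assume a sigmoid activation, the layer names match my pseudocode, and I've assumed each layer updates its own bias. One thing I noticed while writing it is that hiddenErrors[] has to be computed from the output-layer weights *before* those weights are updated, so the order differs slightly from my pseudocode above.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(inputs, desired, hiddenWeights, hiddenBias, weights, outputBias, lr=0.5):
    # forward pass (same as the feedforward loops)
    hiddenOutput = [sigmoid(hiddenBias[j] +
                            sum(inputs[i] * hiddenWeights[j][i] for i in range(len(inputs))))
                    for j in range(len(hiddenWeights))]
    output = [sigmoid(outputBias[k] +
                      sum(hiddenOutput[j] * weights[k][j] for j in range(len(hiddenOutput))))
              for k in range(len(weights))]
    # output-layer deltas: (desired - actual) * sigmoid'
    errors = [(desired[k] - output[k]) * output[k] * (1.0 - output[k])
              for k in range(len(output))]
    # hidden-layer deltas: back-propagate through the CURRENT output weights,
    # before those weights are changed
    hiddenErrors = [hiddenOutput[j] * (1.0 - hiddenOutput[j]) *
                    sum(errors[k] * weights[k][j] for k in range(len(weights)))
                    for j in range(len(hiddenOutput))]
    # update output layer (weights and its own bias)
    for k in range(len(weights)):
        for j in range(len(hiddenOutput)):
            weights[k][j] += lr * errors[k] * hiddenOutput[j]
        outputBias[k] += lr * errors[k]
    # update hidden layer (hiddenWeights and the hidden layer's own bias)
    for j in range(len(hiddenWeights)):
        for i in range(len(inputs)):
            hiddenWeights[j][i] += lr * hiddenErrors[j] * inputs[i]
        hiddenBias[j] += lr * hiddenErrors[j]
    return output
```

Does that ordering (output errors, then hidden errors, then both weight updates) look right?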

Thanks in advance for any help. I've read so many examples and tutorials, and I'm still having trouble working out how to put everything together correctly.