views: 49
answers: 1

I plan to use NeuroDotNet for my PhD thesis, but before that I just want to build some small solutions to get used to the DLL structure. The first problem I want to model using backpropagation is the height-weight ratio. I have some height and weight data, and I want to train my NN so that if I put in a weight, I get the correct height as output. I have 1 input, 1 hidden, and 1 output layer. Now here is the first of many things I can't get around :) 1. My height data is in the form 1.422, 1.5422, etc., and the corresponding weight data is 90, 95, but the NN takes its input as 0/1 or -1/1 and gives its output in the same range. How do I address this problem?

+2  A: 

You have to normalize the data. If you don't know what the ranges of the real-world inputs will be, pick a sensible range that covers all reasonable inputs. If the NN never sees inputs below 0.1 or above 0.9, I don't think it will be a problem.
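A minimal sketch of the min-max normalization the answer describes, applied to height/weight data. The sample values and function names are illustrative, not from the question:

```python
# Min-max normalization: map raw values into the 0.0-1.0 range the network expects,
# and map network outputs back to real-world units afterwards.
def normalize(val, lo, hi):
    """Map val from [lo, hi] to [0.0, 1.0]."""
    return (val - lo) / (hi - lo)

def denormalize(val, lo, hi):
    """Map a network output in [0.0, 1.0] back to [lo, hi]."""
    return val * (hi - lo) + lo

# Illustrative data (kg inputs, metre targets), not taken from the question.
weights = [90.0, 95.0, 102.0]
heights = [1.422, 1.5422, 1.61]

w_lo, w_hi = min(weights), max(weights)
h_lo, h_hi = min(heights), max(heights)

inputs  = [normalize(w, w_lo, w_hi) for w in weights]
targets = [normalize(h, h_lo, h_hi) for h in heights]

print(inputs[0])   # 0.0 -- the smallest weight maps to the bottom of the range
print(inputs[-1])  # 1.0 -- the largest weight maps to the top of the range
```

After training, a network output is converted back to a height with `denormalize(output, h_lo, h_hi)`.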

charlieb
Thank you for your answer. By normalizing, do you mean that my inputs and outputs should be within the (0.0-1.0) range? I have done that by modifying the XOR sample application that comes with NeuroDotNet, and what I understood from the output is that if the output is >0.5 it is taken as true, and if it is less than 0.5 it is taken as false, which is clearly not the case in my problem, as I need discrete values.
Yes, normalization means mapping all values into the 0.0-1.0 range. You need the max and min values for the data set; then you can apply the mapping by computing (val - min) / (max - min) for each val in your data set. And yes, if you are looking for binary output, >0.5 is considered true and <0.5 false; however, neural networks are capable of giving meaningful floating-point outputs. The simple XOR case just uses binary. You say you need discrete values: you can create a mapping where, e.g., 0.0-0.25 = 1, 0.25-0.5 = 2, 0.5-0.75 = 3, 0.75-1.0 = 4, or whatever you need.
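A small sketch of the discrete-value mapping suggested in the comment above, turning a continuous output in [0.0, 1.0] into one of N classes. The four-class split mirrors the example given; the function name and class count are illustrative:

```python
# Map a continuous network output in [0.0, 1.0] to one of n_classes discrete
# values, as in the suggested split 0.0-0.25 -> 1, 0.25-0.5 -> 2, and so on.
def to_class(output, n_classes=4):
    # Clamp the index so an output of exactly 1.0 still lands in the top class.
    idx = min(int(output * n_classes), n_classes - 1)
    return idx + 1

print(to_class(0.10))  # 1
print(to_class(0.30))  # 2
print(to_class(0.60))  # 3
print(to_class(1.00))  # 4
```

Changing `n_classes` adjusts the granularity; the same idea works for any number of evenly sized buckets.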
charlieb
Thank you so much for your help. The application is working fine now, and I am on to modeling a more complex problem. Again, thank you for your help; you gave me the much-needed push :)