I am using

net = newfit(in,out,lag(j),{'tansig','tansig'});

to generate a new neural network. The default value of the number of validation checks is 6.

I am training many networks, and this is taking a lot of time. I suppose it doesn't matter if my results are a bit less accurate, as long as training can be made considerably faster.

How can I train faster?

  • I believe one way might be to reduce the number of validation checks. How can I do that (in code, not through the GUI)?
  • Is there some other way to increase speed?

As I said, the increase in speed may be at a little loss of accuracy.

+1  A: 

(Disclaimer: I don't have the neural network toolbox, so I'm only extrapolating from the Mathworks documentation)

It looks from your input parameters like you're using TRAINLM. According to the documentation, you can set the net.trainParam.max_fail parameter to change the validation checks.

You can set the initial mu value, as well as the increment and decrement factors. But this would require some insight into the expected answer and performance of the search.

For a more blunt approach, you can also control the maximum number of iterations by setting the net.trainParam.epochs parameter to something less than its default of 100. You might also set the net.trainParam.time parameter to limit training to a number of seconds.

You should probably set net.trainParam.show to NaN to skip any displays.
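Putting those suggestions together, a minimal sketch (parameter names are from the TRAINLM documentation; the `net`, `in`, and `out` variables are assumed to come from your `newfit` call, and the specific values are illustrative, not recommendations):

```matlab
% Assumes net was created with newfit, which defaults to trainlm
net.trainParam.max_fail = 3;    % fewer validation checks (default is 6)
net.trainParam.epochs   = 50;   % cap the number of training iterations
net.trainParam.time     = 60;   % or stop after 60 seconds, whichever comes first
net.trainParam.show     = NaN;  % suppress progress displays
net = train(net, in, out);      % train with the tightened limits
```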

mtrw
+4  A: 

Just to extend @mtrw's answer: according to the documentation, training stops when any of these conditions occurs:

  • The maximum number of epochs is reached: net.trainParam.epochs
  • The maximum amount of time is exceeded: net.trainParam.time
  • Performance is minimized to the goal: net.trainParam.goal
  • The performance gradient falls below min_grad: net.trainParam.min_grad
  • mu exceeds mu_max: net.trainParam.mu_max
  • Validation performance has increased more than max_fail times since the last time it decreased (when using validation): net.trainParam.max_fail
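Each of those stopping criteria maps to a field on `net.trainParam`, so they can all be set in code before calling `train` (the values below are illustrative placeholders, and the hidden layer size is an assumption):

```matlab
net = newfit(in, out, 10, {'tansig','tansig'});  % 10 hidden units, for illustration
net.trainParam.epochs   = 200;    % max number of epochs
net.trainParam.time     = 120;    % max training time in seconds
net.trainParam.goal     = 1e-3;   % stop once performance (MSE) drops below this
net.trainParam.min_grad = 1e-6;   % stop when the gradient flattens out
net.trainParam.max_fail = 6;      % allowed consecutive validation failures
[net, tr] = train(net, in, out);  % tr records the training progress
```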

The epochs and time constraints let you put an upper bound on the training duration.

The goal constraint stops training when the performance (error) drops below it, and usually lets you adjust the time/accuracy trade-off: less accurate results for faster execution.

This is similar to *min_grad* (the gradient tells you the steepness of the "descent"): if the magnitude of the gradient falls below *min_grad*, training stops. Intuitively, if the error function is barely changing, we have reached a plateau and should probably stop training, since further improvement is unlikely.

*mu*, *mu_dec*, and *mu_max* are used to control the weight-updating process (backpropagation).

*max_fail* is usually used to avoid over-fitting, not so much for speedup.

My advice: set time and epochs to the maximum your application constraints allow (otherwise the results will be poor), and then control goal and *min_grad* to reach the desired speed/accuracy trade-off. Keep in mind that *max_fail* won't gain you any time, since it's mainly used to ensure good generalization power.
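Following that advice, a sketch that keeps the hard limits generous and instead tunes goal and *min_grad* for speed (the threshold values are assumptions you would adjust for your own data):

```matlab
net.trainParam.epochs   = 1000;   % generous upper bound on iterations
net.trainParam.time     = Inf;    % no wall-clock limit
net.trainParam.goal     = 1e-2;   % looser goal => earlier stop, some accuracy lost
net.trainParam.min_grad = 1e-5;   % bail out sooner when progress plateaus
net = train(net, in, out);
```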

Amro
A: 

Neural nets are treated as objects in MATLAB. To read or set any parameter before (or after) training, you access the network's properties using the dot (.) operator.
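For instance, inspecting and then changing a single training parameter looks like this (a sketch; the hidden layer size is illustrative):

```matlab
net = newfit(in, out, 10, {'tansig','tansig'});
disp(net.trainParam)           % list all training parameters and their values
net.trainParam.max_fail = 3;   % change one before calling train
```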

In addition to mtrw's and Amro's answers, make MATLAB's Neural Network Toolbox documentation your new best friend. It will usually explain things in much better detail.

Zaid