Just to extend @mtrw's answer: according to the documentation, training stops when any of these conditions occurs:
- The maximum number of epochs is reached:
net.trainParam.epochs
- The maximum amount of time is exceeded:
net.trainParam.time
- Performance is minimized to the goal:
net.trainParam.goal
- The performance gradient falls below min_grad:
net.trainParam.min_grad
- mu exceeds mu_max:
net.trainParam.mu_max
- Validation performance has increased more than max_fail times since
the last time it decreased (when using validation):
net.trainParam.max_fail
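
For example (a minimal sketch, assuming the newer `feedforwardnet` interface and your own input/target matrices `x` and `t`), you can set these fields before calling `train`, and then inspect the training record to see which condition actually fired:

    % x: inputs (R-by-N), t: targets (S-by-N) -- placeholders for your own data
    net = feedforwardnet(10);          % 10 hidden neurons, default trainlm algorithm

    net.trainParam.epochs   = 1000;    % max number of epochs
    net.trainParam.time     = 60;      % max training time, in seconds
    net.trainParam.goal     = 1e-5;    % stop when performance (MSE) drops below this
    net.trainParam.min_grad = 1e-7;    % stop when the gradient magnitude falls below this
    net.trainParam.mu_max   = 1e10;    % stop when mu exceeds this
    net.trainParam.max_fail = 6;       % consecutive validation failures allowed

    [net, tr] = train(net, x, t);
    tr.stop                            % reason training stopped, e.g. 'Maximum epoch reached.'
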
The epochs and time constraints let you put an upper bound on the total training duration.
The goal constraint stops training when the performance (error) drops below it, and is usually how you adjust the time/accuracy trade-off: a looser goal gives less accurate results in exchange for faster execution.
*min_grad* works similarly (the gradient tells you the strength of the "descent"): if the magnitude of the gradient falls below *min_grad*, training stops. The intuition is that if the error function is barely changing, we have reached a plateau and should probably stop, since further training is unlikely to improve the result by much.
*mu*, *mu_dec*, and *mu_max* control the weight-update process of the (default Levenberg-Marquardt) training algorithm: *mu* acts as a damping factor that is adapted during training, and if it grows beyond *mu_max* training stops, since no step that reduces the error can be found anymore.
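
Concretely (assuming the default `trainlm` Levenberg-Marquardt algorithm), each weight update has roughly the form

    dW = -(J'*J + mu*I) \ (J'*e)

where `J` is the Jacobian of the errors `e` with respect to the weights. A small *mu* makes the step behave like Gauss-Newton, a large *mu* like plain gradient descent with a tiny step, and *mu* is adjusted up or down depending on whether the last step reduced the error.
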
*max_fail* is usually used to avoid over-fitting, not so much for speedup.
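Note that *max_fail* only kicks in if part of your data is set aside for validation, which you can control through the data-division settings (a sketch; the ratios here are just illustrative):

    net.divideFcn              = 'dividerand';  % random split into train/val/test
    net.divideParam.trainRatio = 0.70;
    net.divideParam.valRatio   = 0.15;          % validation set used for early stopping
    net.divideParam.testRatio  = 0.15;
    net.trainParam.max_fail    = 6;             % stop after 6 consecutive validation increases
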
My advice: set *time* and *epochs* to the maximum your application constraints allow (otherwise the results will be poor), and then use *goal* and *min_grad* to reach the desired speed/accuracy trade-off. Keep in mind that *max_fail* won't make you gain any time, since it's mainly there to ensure good generalization power.