Artificial Neural Networks: Some Misconceptions (Part 3)
The learning algorithm of a neural network tries to optimize the network's weights until some stopping condition is met. This condition is typically one of three: the network's error on the training set falls below an acceptable threshold, the network's error on the validation set begins to deteriorate, or the specified computational budget is exhausted. The most common learning algorithm for neural networks is back-propagation, used together with stochastic gradient descent, which was discussed earlier in this series. There are some problems with this approach: adjusting all of the weights at once can move the network a long way through weight space, gradient descent is quite slow, and gradient descent is susceptible to local minima.
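To make those three stopping conditions concrete, here is a minimal NumPy sketch of a stochastic-gradient-descent training loop for a one-hidden-layer network. The toy data, architecture, learning rate, error threshold, and patience values are all illustrative assumptions, not anything prescribed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem; sizes and hyperparameters are assumptions.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, :1]) + 0.1 * rng.standard_normal((200, 1))
X_train, y_train = X[:150], y[:150]
X_val, y_val = X[150:], y[150:]

# One hidden layer with tanh activation.
W1 = 0.5 * rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = 0.5 * rng.standard_normal((8, 1)); b2 = np.zeros(1)

def mse(X, y):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    return float(np.mean((pred - y) ** 2))

lr = 0.05
max_epochs = 5000          # stopping condition 3: computational budget
target_train_error = 1e-3  # stopping condition 1: acceptable training error
patience = 20              # stopping condition 2: epochs of worsening
                           # validation error tolerated before stopping

best_val, bad_epochs = np.inf, 0

for epoch in range(max_epochs):
    # Plain SGD: one example at a time, back-propagating the
    # squared-error gradient through both layers.
    for i in rng.permutation(len(X_train)):
        x_i, y_i = X_train[i:i + 1], y_train[i:i + 1]
        h = np.tanh(x_i @ W1 + b1)
        pred = h @ W2 + b2
        d_pred = 2 * (pred - y_i)           # dL/d_pred
        d_h = d_pred @ W2.T * (1 - h ** 2)  # back through tanh
        W2 -= lr * h.T @ d_pred
        b2 -= lr * d_pred[0]
        W1 -= lr * x_i.T @ d_h
        b1 -= lr * d_h[0]

    train_err, val_err = mse(X_train, y_train), mse(X_val, y_val)

    # Condition 1: training error is acceptably low.
    if train_err < target_train_error:
        break
    # Condition 2: validation error has stopped improving.
    if val_err < best_val:
        best_val, bad_epochs = val_err, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
# Condition 3: if neither triggers, the epoch budget bounds the loop.
```

Note that the updates visit one training example at a time; this per-example noise is part of why plain SGD is slow, though it can also help the search escape shallow local minima.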