Summed Weight Neuron Perturbation: An O(N) Improvement Over Weight Perturbation
The algorithm presented performs gradient descent on the weight space of an Artificial Neural Network (ANN), using a finite difference to approximate the gradient. The method is novel in that it achieves a computational complexity similar to that of Node Perturbation, O(N³), but does not require access to the activity of hidden or internal neurons. This is possible due to a stochastic relation between perturbations at the weights and at the neurons of an ANN. The algorithm is also similar to Weight Perturbation in that it is optimal in terms of hardware requirements when used for the training of VLSI implementations of ANNs.
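For reference, the sketch below illustrates the baseline finite-difference Weight Perturbation estimate that the paper improves on: each weight is nudged in turn and the resulting change in error approximates the corresponding gradient component. The toy single-layer tanh network, squared-error loss, and function names are assumptions for illustration only; the summed-weight variant itself, which exploits the stochastic relation between weight and neuron perturbations described above, is not spelled out in the abstract and is not reproduced here.

```python
import numpy as np

def loss(weights, x, target):
    # Hypothetical single-layer tanh network with squared-error loss,
    # used only to illustrate the finite-difference idea.
    y = np.tanh(weights @ x)
    return float(np.sum((y - target) ** 2))

def weight_perturbation_gradient(weights, x, target, eps=1e-4):
    # Baseline Weight Perturbation: estimate dE/dW by perturbing one
    # weight at a time (forward differences). Each weight needs its own
    # extra forward pass, which is the per-weight cost a neuron-level
    # scheme avoids.
    base = loss(weights, x, target)
    grad = np.zeros_like(weights)
    for idx in np.ndindex(weights.shape):
        w_pert = weights.copy()
        w_pert[idx] += eps
        grad[idx] = (loss(w_pert, x, target) - base) / eps
    return grad

# Example: one gradient-descent step using only forward passes.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 3))
x = np.array([0.5, -1.0, 0.25])
t = np.array([0.0, 1.0])
W -= 0.1 * weight_perturbation_gradient(W, x, t)
```

Because every weight requires a separate forward pass, this baseline costs a factor on the order of N more than a neuron-level scheme such as Node Perturbation, which is the gap the paper's summed-weight method closes without needing access to hidden-neuron activity.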
Neural Information Processing Systems
Dec-31-1993