A Parallel Gradient Descent Method for Learning in Analog VLSI Neural Networks

Alspector, J., Meir, R., Yuhas, B., Jayakumar, A., Lippe, D.

Neural Information Processing Systems 

Typical methods for gradient descent in neural network learning involve calculating derivatives based on detailed knowledge of the network model. This requires extensive, time-consuming calculations for each pattern presentation and a level of precision that is difficult to achieve in VLSI. We present here a perturbation technique that measures, rather than calculates, the gradient. Because the technique uses the actual network as the measuring device, errors in modeling neuron activation and synaptic weights do not cause errors in gradient descent. The method is inherently parallel and easy to implement in VLSI.
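To make the idea concrete, below is a minimal software sketch of measuring a gradient by simultaneous weight perturbation: all weights are perturbed in parallel by small random amounts, and the gradient estimate is formed from two evaluations of the network error rather than from analytic derivatives. This is only an illustration of the general principle under stated assumptions, not the authors' hardware procedure; the loss function, step sizes, and names here are hypothetical.

```python
import numpy as np

def loss(w):
    # Stand-in for the measured network error on a training pattern;
    # in the analog setting this would be a physical measurement.
    return np.sum((w - 1.0) ** 2)

def perturbation_gradient(w, eps, rng):
    # Perturb every weight at once by a random +/- eps step.
    delta = rng.choice([-1.0, 1.0], size=w.shape) * eps
    # Two error measurements bracket the perturbation; no model
    # derivatives are ever computed.
    diff = loss(w + delta) - loss(w - delta)
    # Simultaneous-perturbation estimate: one division per weight.
    return diff / (2.0 * delta)

rng = np.random.default_rng(0)
w = np.zeros(4)
for _ in range(200):
    w -= 0.05 * perturbation_gradient(w, eps=1e-3, rng=rng)
print(w)  # approaches the minimizer [1, 1, 1, 1]
```

The estimate is noisy for any single perturbation, but its expectation is the true gradient, so repeated descent steps still converge; this is what makes the measurement robust to modeling errors in the physical network.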
