Collaborating Authors: Jabri, Marwan


WATTLE: A Trainable Gain Analogue VLSI Neural Network

Neural Information Processing Systems

This paper describes a low-power analogue VLSI neural network called Wattle. Wattle is a 10:6:4 three-layer perceptron with multiplying DAC synapses and on-chip switched-capacitor neurons, fabricated in 1.2 µm CMOS.


Summed Weight Neuron Perturbation: An O(N) Improvement Over Weight Perturbation

Neural Information Processing Systems

The algorithm presented performs gradient descent on the weight space of an Artificial Neural Network (ANN), using a finite difference to approximate the gradient. The method is novel in that it achieves a computational complexity similar to that of Node Perturbation, O(N³), but does not require access to the activity of hidden or internal neurons. This is possible due to a stochastic relation between perturbations at the weights and the neurons of an ANN. The algorithm is also similar to Weight Perturbation in that it is optimal in terms of hardware requirements when used for the training of VLSI implementations of ANNs.

