A Learning Analog Neural Network Chip with Continuous-Time Recurrent Dynamics
Neural Information Processing Systems
The recurrent network, containing six continuous-time analog neurons and 42 free parameters (connection strengths and thresholds), is trained to generate time-varying outputs approximating given periodic signals presented to the network. The chip implements a stochastic perturbative algorithm, which observes the error gradient along random directions in the parameter space for error-descent learning. In addition to the integrated learning functions and the generation of pseudo-random perturbations, the chip provides for teacher forcing and long-term storage of the volatile parameters. The network learns a 1 kHz circular trajectory in 100 sec. The chip occupies 2 mm x 2 mm in a 2 µm CMOS process, and dissipates 1.2 mW.

1 Introduction

Exact gradient-descent algorithms for supervised learning in dynamic recurrent networks [1-3] are fairly complex and do not provide for a scalable implementation in a standard 2-D VLSI process. We have implemented a fairly simple and scalable

*Present address: Johns Hopkins University, ECE Dept., Baltimore, MD 21218-2686.
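The stochastic perturbative learning rule described above (probing the error along a random direction in parameter space and descending the estimated slope) can be sketched in software. This is a minimal illustrative model, not the chip's analog circuitry: the quadratic toy error, the perturbation size `sigma`, and the learning rate `lr` are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def spsa_step(theta, error_fn, sigma=0.01, lr=0.1):
    """One stochastic perturbative update: observe the error along a
    random +/- direction and descend the estimated gradient."""
    # Random binary perturbation of all parameters at once.
    pi = sigma * rng.choice([-1.0, 1.0], size=theta.shape)
    # Scalar slope of the error along the perturbation direction.
    g = (error_fn(theta + pi) - error_fn(theta - pi)) / (2.0 * sigma)
    # Descend: each parameter moves against the slope it contributed to.
    return theta - lr * g * np.sign(pi)

# Toy stand-in for the network error: quadratic bowl with a known minimum.
target = np.array([1.0, -2.0, 3.0])
error = lambda th: float(np.sum((th - target) ** 2))

theta = np.zeros(3)
for _ in range(2000):
    theta = spsa_step(theta, error)
```

The appeal for analog VLSI is that the rule needs only two global error evaluations per update and a shared random bit stream, rather than the per-weight gradient machinery of exact recurrent-network algorithms.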
Dec-31-1994