Collaborating Authors

 Schwartz, Daniel B.


An Analog VLSI Splining Network

Neural Information Processing Systems

We have produced a VLSI circuit capable of learning to approximate arbitrary smooth functions of a single variable using a technique closely related to splines. The circuit effectively has 512 knots spaced on a uniform grid and has full support for learning. The circuit can also be used to approximate multi-variable functions as a sum of splines. An interesting and, as yet, nearly untapped set of applications for VLSI implementations of neural network learning systems can be found in adaptive control and nonlinear signal processing. In most such applications, the learning task consists of approximating a real function of a small number of continuous variables from discrete data points.
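
To make the idea concrete, here is a minimal software analogue of the approach described above (not the authors' circuit): a piecewise-linear approximator with 512 knots on a uniform grid, trained by delta-rule updates to the two knots that bracket each input. The class name, input range, and learning rate are assumptions for the sketch; a multi-variable function would then be approximated as a sum of such one-dimensional splines, one per input.

```python
import numpy as np

# Illustrative software analogue (not the authors' circuit): a piecewise-linear
# "splining" approximator with 512 knots on a uniform grid over [x_min, x_max],
# trained by LMS-style updates to the two knot values bracketing each input.
# The names, input range, and learning rate are assumptions for this sketch.

class UniformKnotSpline:
    def __init__(self, n_knots=512, x_min=-1.0, x_max=1.0, lr=0.3):
        self.knots = np.zeros(n_knots)          # knot (weight) values
        self.x_min, self.x_max = x_min, x_max
        self.n = n_knots
        self.lr = lr

    def _locate(self, x):
        # Map x to a fractional knot index and the pair of bracketing knots.
        t = (x - self.x_min) / (self.x_max - self.x_min) * (self.n - 1)
        i = int(np.clip(np.floor(t), 0, self.n - 2))
        return i, t - i

    def predict(self, x):
        i, frac = self._locate(x)
        # Linear interpolation between the two neighbouring knots.
        return (1 - frac) * self.knots[i] + frac * self.knots[i + 1]

    def update(self, x, target):
        i, frac = self._locate(x)
        err = target - self.predict(x)
        # Distribute the correction to the bracketing knots in proportion
        # to their interpolation weights (delta-rule / LMS step).
        self.knots[i]     += self.lr * err * (1 - frac)
        self.knots[i + 1] += self.lr * err * frac

# Example: learn sin(pi * x) from noisy samples.
rng = np.random.default_rng(0)
net = UniformKnotSpline()
for _ in range(50000):
    x = rng.uniform(-1, 1)
    net.update(x, np.sin(np.pi * x) + 0.01 * rng.normal())
print(net.predict(0.5))   # close to sin(pi/2) = 1.0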


Adaptive Neural Networks Using MOS Charge Storage

Neural Information Processing Systems

However, to achieve the full power of a VLSI implementation of an adaptive algorithm, the learning operation must be built into the circuit. We have fabricated and tested a circuit ideal for this purpose by connecting a pair of capacitors with a CCD-like structure, allowing for variable-size weight changes as well as a weight decay operation. A 2.5µ CMOS version achieves better than 10 bits of dynamic range in a 140µ …
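
The following behavioural sketch (a software model, not a circuit simulation) illustrates the stored-charge weight described above: variable-size weight changes quantized to a smallest resolvable step, clipping at the storage limits, and a decay operation that leaks the stored value toward zero. The step size, decay factor, and ranges are assumptions chosen to suggest roughly 10 bits of dynamic range.

```python
import numpy as np

# Behavioural sketch (not a circuit simulation) of an analog weight stored as
# charge on a capacitor pair: weight changes come in variable-size steps and a
# decay operation pulls the stored value toward zero. Step size, decay factor,
# and clipping range are assumptions illustrating ~10 bits of dynamic range.

class ChargeStorageWeight:
    def __init__(self, w_max=1.0, min_step=1.0 / 1024, decay=0.999):
        self.w = 0.0
        self.w_max = w_max          # saturation of the stored charge
        self.min_step = min_step    # smallest resolvable change (~10 bits)
        self.decay = decay

    def adjust(self, delta):
        # Variable-size weight change, quantized to the minimum step and
        # clipped to the physical storage range.
        step = np.round(delta / self.min_step) * self.min_step
        self.w = float(np.clip(self.w + step, -self.w_max, self.w_max))

    def apply_decay(self):
        # Weight decay: leak a fixed fraction of the stored charge.
        self.w *= self.decay

# Example: LMS-style updates of one weight with periodic decay.
w = ChargeStorageWeight()
for t in range(1000):
    grad = 0.5 - w.w            # toy error signal driving w toward 0.5
    w.adjust(0.05 * grad)
    if t % 10 == 0:
        w.apply_decay()
print(round(w.w, 3))
```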


Microelectronic Implementations of Connectionist Neural Networks

Neural Information Processing Systems

Three chip designs are described: a hybrid digital/analog programmable connection matrix, an analog connection matrix with adjustable connection strengths, and a digital pipelined best-match chip. The common feature of the designs is the distribution of arithmetic processing power amongst the data storage elements to minimize data movement.
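
As an illustration of that architectural theme (a software sketch, not a model of any of the three chips), the best-match operation can be pictured as each stored word computing its own Hamming distance to the probe where it is stored, so that only scalar distances, rather than the stored patterns, move to a comparator. The names and data here are invented for the example.

```python
import numpy as np

# Illustrative sketch of the architectural idea, not any of the three chips:
# arithmetic is performed where the data is stored, so only small results
# (here, per-row distances) move rather than the stored patterns themselves.

class BestMatchMemory:
    def __init__(self, words):
        # words: stored binary patterns, shape (n_words, n_bits)
        self.words = np.asarray(words, dtype=np.uint8)

    def best_match(self, probe):
        probe = np.asarray(probe, dtype=np.uint8)
        # Each row computes its Hamming distance "locally"; only the scalar
        # distances are compared to pick the winning word.
        distances = np.count_nonzero(self.words != probe, axis=1)
        return int(np.argmin(distances)), int(distances.min())

mem = BestMatchMemory([[0, 1, 1, 0, 1],
                       [1, 1, 0, 0, 1],
                       [0, 0, 0, 1, 1]])
idx, dist = mem.best_match([1, 1, 0, 1, 1])
print(idx, dist)   # 1 1  (second stored word differs in a single bit)
```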


Microelectronic Implementations of Connectionist Neural Networks

Neural Information Processing Systems

It is clear that conventional computers lag far behind organic computers when it comes to dealing with very large data rates in problems such as computer vision and speech recognition.