Industry
Real-time autonomous robot navigation using VLSI neural networks
Tarassenko, Lionel, Brownlow, Michael, Marshall, Gillian, Tombs, Jan, Murray, Alan
There have been very few demonstrations of the application of VLSI neural networks to real-world problems. Yet there are many signal processing, pattern recognition or optimization problems where a large number of competing hypotheses need to be explored in parallel, most often in real time. The massive parallelism of VLSI neural network devices, with one multiplier circuit per synapse, is ideally suited to such problems. In this paper, we present preliminary results from our design for a real-time robot navigation system based on VLSI neural network modules.
Multi-Layer Perceptrons with B-Spline Receptive Field Functions
Lane, Stephen H., Flax, Marshall, Handelman, David, Gelfand, Jack
Multi-layer perceptrons are often slow to learn nonlinear functions with complex local structure due to the global nature of their function approximations. It is shown that standard multi-layer perceptrons are actually a special case of a more general network formulation that incorporates B-splines into the node computations. This allows novel spline network architectures to be developed that can combine the generalization capabilities and scaling properties of global multi-layer feedforward networks with the computational efficiency and learning speed of local computational paradigms. Simulation results are presented for the well-known spiral problem of Wieland and of Lang and Witbrock to show the effectiveness of the Spline Net approach.
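The node computation the abstract describes lends itself to a compact sketch. The following is a minimal illustration, assuming a scalar input, a uniform cubic B-spline as the receptive-field function, and a gated-sigmoid node form; the gating arrangement, centres and widths here are illustrative assumptions, not the paper's exact formulation:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def cubic_bspline(u):
        # Uniform cubic B-spline bump: support [-2, 2], C2-continuous.
        u = np.abs(u)
        return np.where(u < 1.0, (4 - 6*u**2 + 3*u**3) / 6.0,
               np.where(u < 2.0, (2 - u)**3 / 6.0, 0.0))

    def spline_node(x, centre, width, a, b):
        # A standard sigmoidal unit gated by a local B-spline
        # receptive field centred at `centre`.
        return cubic_bspline((x - centre) / width) * sigmoid(a * x + b)

    # Forward pass of a one-input, one-output "spline net" layer.
    rng = np.random.default_rng(0)
    centres = np.linspace(-1, 1, 8)          # receptive-field centres
    a, b = rng.normal(size=8), rng.normal(size=8)
    v = rng.normal(size=8)                   # output weights

    def forward(x):
        h = np.array([spline_node(x, c, 0.5, ai, bi)
                      for c, ai, bi in zip(centres, a, b)])
        return v @ h

    print(forward(0.3))

Note that if the spline window is wide enough to be flat over all the data, the gate is effectively constant and each node reduces to a plain sigmoidal unit, which is the sense in which a standard multi-layer perceptron is a special case.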
VLSI Implementations of Learning and Memory Systems: A Review
A large number of VLSI implementations of neural network models have been reported. The diversity of these implementations is noteworthy. This paper attempts to put a group of representative VLSI implementations in perspective by comparing and contrasting them. IMPLEMENTATION Changing the way information is represented can be beneficial. For example, a change of representation can make information more compact for storage and transmission.
Signal Processing by Multiplexing and Demultiplexing in Neurons
The signal content of the codes encoded by a presynaptic neuron will be decoded by other neurons postsynaptically. Neurons are often thought to encode a single type of code, but there is evidence suggesting that neurons may encode more than one type of signal. One of the mechanisms for embedding multiple types of signals processed by a neuron is multiplexing. When the signals are multiplexed, they also need to be demultiplexed to extract the useful information transmitted by the neurons. Theoretical and experimental evidence of such a multiplexing and demultiplexing scheme for signal processing by neurons will be given below.
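As a toy illustration of the multiplexing/demultiplexing idea, the sketch below frequency-multiplexes a slow and a fast signal onto one channel and separates them again with a simple low-pass filter. This is a signal-processing caricature under assumed frequencies and filter length, not the biophysical mechanism the paper discusses:

    import numpy as np

    t = np.linspace(0, 1, 1000)
    slow = np.sin(2 * np.pi * 3 * t)          # slow component (e.g. a rate code)
    fast = 0.5 * np.sin(2 * np.pi * 80 * t)   # fast component (e.g. a timing code)
    mux = slow + fast                         # one channel carries both

    # Demultiplex by frequency: a moving-average low-pass recovers
    # the slow component; the residual is the fast component.
    win = 50
    kernel = np.ones(win) / win
    slow_hat = np.convolve(mux, kernel, mode='same')
    fast_hat = mux - slow_hat

    print(np.corrcoef(slow, slow_hat)[0, 1])  # close to 1
    print(np.corrcoef(fast, fast_hat)[0, 1])  # close to 1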
Lg Depth Estimation and Ripple Fire Characterization Using Artificial Neural Networks
Perry, John L., Baumgardt, Douglas R.
This study has demonstrated how artificial neural networks (ANNs) can be used to characterize seismic sources using high-frequency regional seismic data. We have taken the novel approach of using ANNs as a research tool for obtaining seismic source information, specifically depth of focus for earthquakes and ripple-fire characteristics for economic blasts, rather than as just a feature classifier between earthquake and explosion populations. Overall, we have found that ANNs have potential applications to seismic event characterization and identification, beyond just as a feature classifier. In future studies, these techniques should be applied to actual data of regional seismic events recorded at the new regional seismic arrays. The results of this study indicate that an ANN should be evaluated as part of an operational seismic event identification system. 1 INTRODUCTION ANNs have usually been used as pattern matching algorithms, and recent studies have applied ANNs to standard classification between classes of earthquakes and explosions using waveform features (Dowla et al., 1989; Dysart and Pulli, 1990).
Discrete Affine Wavelet Transforms For Analysis And Synthesis Of Feedforward Neural Networks
Pati, Y. C., Krishnaprasad, P. S.
In this paper we show that discrete affine wavelet transforms can provide a tool for the analysis and synthesis of standard feedforward neural networks. It is shown that wavelet frames for L2(IR) can be constructed based upon sigmoids. The spatio-spectral localization property of wavelets can be exploited in defining the topology and determining the weights of a feedforward network. Training a network constructed using the synthesis procedure described here involves minimization of a convex cost functional and therefore avoids pitfalls inherent in standard backpropagation algorithms. Extension of these methods to L2(IR^N) is also discussed. 1 INTRODUCTION Feedforward type neural network models constructed from empirical data have been found to display significant predictive power [6]. Mathematical justification in support of such predictive power may be drawn from various density and approximation theorems [1, 2, 5].
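To make the sigmoid-based construction concrete, here is one minimal sketch: a zero-mean function built from shifted sigmoids serves as a mother wavelet, its dyadic dilates and translates become hidden units with fixed input weights, and only the output weights are fit, so training reduces to a convex linear least-squares problem. The particular bump and the dilation/translation grid are illustrative assumptions, not the frame construction proved in the paper:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def psi(x):
        # Zero-mean "mother wavelet" built from sigmoids: the
        # difference of two shifted sigmoid bumps (an illustrative
        # construction, not necessarily the paper's).
        bump = lambda u: sigmoid(u + 1.0) - sigmoid(u - 1.0)
        return bump(x + 1.0) - bump(x - 1.0)

    # Frame elements psi(2^j x - k): dilated/translated copies,
    # realized as hidden units with fixed input weights and biases.
    def features(x, levels=(0, 1, 2), shifts=range(-8, 9)):
        return np.array([psi((2.0**j) * x - k)
                         for j in levels for k in shifts])

    # Fitting only the output weights c is linear least squares
    # (convex) -- no backpropagation through hidden layers.
    x = np.linspace(-3, 3, 200)
    target = np.sin(2 * x) * np.exp(-x**2)
    Phi = np.stack([features(xi) for xi in x])
    c, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    print(np.max(np.abs(Phi @ c - target)))   # approximation error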
Rapidly Adapting Artificial Neural Networks for Autonomous Navigation
Dean A. Pomerleau, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
The ALVINN (Autonomous Land Vehicle In a Neural Network) project addresses the problem of training artificial neural networks in real time to perform difficult perception tasks. ALVINN is a back-propagation network that uses inputs from a video camera and an imaging laser rangefinder to drive the CMU Navlab, a modified Chevy van. This paper describes training techniques which allow ALVINN to learn in under 5 minutes to autonomously control the Navlab by watching a human driver's response to new situations. Using these techniques, ALVINN has been trained to drive in a variety of circumstances including single-lane paved and unpaved roads, multilane lined and unlined roads, and obstacle-ridden on- and off-road environments, at speeds of up to 20 miles per hour. 1 INTRODUCTION Previous trainable connectionist perception systems have often ignored important aspects of the form and content of available sensor data. Because of the assumed impracticality of training networks to perform realistic high-level perception tasks, connectionist researchers have frequently restricted their task domains to either toy problems (e.g. the TC identification problem [11] [6]) or fixed low-level operations (e.g.
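A minimal sketch of the kind of on-line backpropagation training described here is given below. The layer sizes are assumptions loosely modeled on a retina-plus-steering-units layout, and the frames captured while a person drives are replaced by a synthetic stand-in (human_example is hypothetical):

    import numpy as np

    rng = np.random.default_rng(0)
    N_IN, N_HID, N_OUT = 30 * 32, 5, 30   # retina pixels, hidden, steering bins

    W1 = rng.normal(0.0, 0.1, (N_HID, N_IN))
    W2 = rng.normal(0.0, 0.1, (N_OUT, N_HID))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Stand-in for frames grabbed while a person drives: a random image
    # paired with a Gaussian bump over the steering unit they chose.
    def human_example():
        img = rng.random(N_IN)
        centre = rng.integers(N_OUT)
        bump = np.exp(-0.5 * ((np.arange(N_OUT) - centre) / 2.0)**2)
        return img, bump

    lr = 0.05
    for _ in range(200):                  # on-line: one update per frame
        x, t = human_example()
        h = sigmoid(W1 @ x)
        y = sigmoid(W2 @ h)
        d_out = (y - t) * y * (1 - y)     # backprop deltas, squared error
        d_hid = (W2.T @ d_out) * h * (1 - h)
        W2 -= lr * np.outer(d_out, h)
        W1 -= lr * np.outer(d_hid, x)

Representing the target as a bump over steering units, rather than a single scalar angle, is one plausible way to let the network express a distribution over steering directions; it is used here only to make the sketch self-contained.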
Navigating through Temporal Difference
Barto, Sutton and Watkins [2] introduced a grid task as a didactic example of temporal difference planning and asynchronous dynamic programming. This paper considers the effects of changing the coding of the input stimulus, and demonstrates that the self-supervised learning of a particular form of hidden unit representation improves performance.
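For readers unfamiliar with the grid task, the following is a minimal TD(0) sketch in the spirit of Barto, Sutton and Watkins: a random-policy walk on a small grid with a single rewarded goal, with state values updated toward the bootstrapped target. The grid size, learning rate and discount are assumptions; the paper's input-coding and hidden-unit experiments are not reproduced:

    import numpy as np

    rng = np.random.default_rng(0)
    SIZE, GOAL = 5, (4, 4)
    V = np.zeros((SIZE, SIZE))                # state-value estimates
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]

    def step(s):
        dr, dc = moves[rng.integers(4)]       # random exploratory policy
        r = min(max(s[0] + dr, 0), SIZE - 1)  # clip at the grid walls
        c = min(max(s[1] + dc, 0), SIZE - 1)
        s2 = (r, c)
        reward = 1.0 if s2 == GOAL else 0.0
        return s2, reward

    alpha, gamma = 0.1, 0.9
    for _ in range(500):                      # episodes
        s = (0, 0)
        while s != GOAL:
            s2, r = step(s)
            # TD(0): move V(s) toward the target r + gamma * V(s')
            V[s] += alpha * (r + gamma * V[s2] - V[s])
            s = s2

    print(np.round(V, 2))                     # values rise toward the goal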