Analog Neural Networks as Decoders
Erlanson, Ruth, Abu-Mostafa, Yaser
K-winner-take-all (KWTA) networks, which can be realized as analog neural networks with feedback, can be used as decoders of a class of nonlinear error-correcting codes. By interconnecting such KWTA networks, we can construct decoders capable of decoding more powerful codes. We consider several families of interconnected KWTA networks, analyze their performance in terms of coding-theory metrics, and consider the feasibility of embedding such networks in VLSI technologies.
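To illustrate why a KWTA operation decodes, consider a constant-weight code whose codewords are the binary vectors of length n with exactly k ones (such codes are nonlinear). Keeping the k largest analog inputs is the fixed point a KWTA network relaxes to, and it returns the codeword most correlated with the received vector. A minimal sketch of that computation, not of the analog circuit itself; the code length, weight, and noise level are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def kwta_decode(received, k):
    """Decode a noisy analog vector to the nearest constant-weight codeword
    (exactly k ones) by keeping the k largest components -- the equilibrium
    a KWTA network settles to."""
    out = np.zeros_like(received)
    out[np.argsort(received)[-k:]] = 1.0
    return out

# Example: an n=8, k=3 codeword corrupted by Gaussian noise
rng = np.random.default_rng(0)
codeword = np.array([1, 0, 1, 0, 0, 1, 0, 0], dtype=float)
noisy = codeword + 0.3 * rng.standard_normal(8)
print(kwta_decode(noisy, k=3))   # recovers the transmitted codeword
```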
Rapidly Adapting Artificial Neural Networks for Autonomous Navigation
Dean A. Pomerleau, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213

Abstract

The ALVINN (Autonomous Land Vehicle In a Neural Network) project addresses the problem of training artificial neural networks in real time to perform difficult perception tasks. ALVINN is a back-propagation network that uses inputs from a video camera and an imaging laser rangefinder to drive the CMU Navlab, a modified Chevy van. This paper describes training techniques that allow ALVINN to learn, in under 5 minutes, to autonomously control the Navlab by watching a human driver's response to new situations. Using these techniques, ALVINN has been trained to drive in a variety of circumstances, including single-lane paved and unpaved roads, multilane lined and unlined roads, and obstacle-ridden on- and off-road environments, at speeds of up to 20 miles per hour.

1 INTRODUCTION

Previous trainable connectionist perception systems have often ignored important aspects of the form and content of available sensor data. Because of the assumed impracticality of training networks to perform realistic high-level perception tasks, connectionist researchers have frequently restricted their task domains to either toy problems (e.g. the T-C identification problem [11] [6]) or fixed low-level operations (e.g.
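A minimal sketch of an ALVINN-style steering network trained by on-line back-propagation from a human driver's actions. The layer sizes (a 30x32 input retina, a small hidden layer, a row of 30 steering output units) follow published ALVINN descriptions but are assumptions here, as are the learning rate and the target encoding:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 30 * 32, 5, 30
W1 = 0.1 * rng.standard_normal((n_hid, n_in))   # retina -> hidden weights
W2 = 0.1 * rng.standard_normal((n_out, n_hid))  # hidden -> steering weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(image, target, lr=0.01):
    """One on-line backprop step. `image` is a flattened 30x32 sensor frame;
    `target` is a Gaussian bump over the 30 output units, centered on the
    steering direction the human driver chose for this frame."""
    global W1, W2
    h = sigmoid(W1 @ image)
    y = sigmoid(W2 @ h)
    dy = (y - target) * y * (1 - y)    # squared-error output deltas
    dh = (W2.T @ dy) * h * (1 - h)     # backpropagated hidden deltas
    W2 -= lr * np.outer(dy, h)
    W1 -= lr * np.outer(dh, image)
    return y                           # network's steering response
```

Feeding each live camera frame and the driver's current steering angle through `train_step` is the "watch the driver" training regime the abstract describes.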
Development and Spatial Structure of Cortical Feature Maps: A Model Study
Obermayer, Klaus, Ritter, Helge, Schulten, Klaus
Feature selective cells in the primary visual cortex of several species are organized in hierarchical topographic maps of stimulus features like "position in visual space", "orientation", and "ocular dominance". In order to understand and describe their spatial structure and their development, we investigate a self-organizing neural network model based on the feature map algorithm. The model explains map formation as a dimension-reducing mapping from a high-dimensional feature space onto a two-dimensional lattice, such that "similarity" between features (or feature combinations) is translated into "spatial proximity" between the corresponding feature selective cells. The model is able to reproduce several aspects of the spatial structure of cortical maps in the visual cortex. 1 Introduction Cortical maps are functionally defined structures of the cortex, which are characterized by an ordered spatial distribution of functionally specialized cells along the cortical surface. In the primary visual area(s) the response properties of these cells must be described by several independent features, and there is a strong tendency to map combinations of these features onto the cortical surface in a way that translates "similarity" into "spatial proximity" of the corresponding feature selective cells (see e.g.
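A minimal sketch of the feature-map (Kohonen) algorithm the model is based on, applied to a five-dimensional feature space: retinal position (x, y), orientation encoded as a two-component vector, and ocular dominance. Grid size, stimulus ranges, and the annealing schedules are illustrative assumptions:

```python
import numpy as np

N, dim, steps = 32, 5, 20000
rng = np.random.default_rng(0)
w = rng.uniform(0, 1, (N, N, dim))     # receptive-field centers on an NxN lattice
gy, gx = np.mgrid[0:N, 0:N]

for step in range(steps):
    eps = 0.1 * (1 - step / steps)     # decaying learning rate
    sigma = 1 + 4 * (1 - step / steps) # shrinking neighborhood width
    # Draw a random stimulus: position, orientation, ocularity
    x, y = rng.uniform(0, 1), rng.uniform(0, 1)
    theta = rng.uniform(0, np.pi)
    v = np.array([x, y, 0.2 * np.cos(2 * theta), 0.2 * np.sin(2 * theta),
                  rng.choice([-0.1, 0.1])])
    # Best-matching unit, then a Gaussian-neighborhood update toward v
    d = ((w - v) ** 2).sum(axis=2)
    iy, ix = np.unravel_index(np.argmin(d), d.shape)
    h = np.exp(-((gx - ix) ** 2 + (gy - iy) ** 2) / (2 * sigma ** 2))
    w += eps * h[..., None] * (v - w)
# Columns 2-4 of w now show orientation and ocular dominance maps over the lattice
```

The dimension reduction the abstract describes is visible in the result: nearby lattice units acquire similar feature combinations, so "similarity" in the 5-D feature space becomes "spatial proximity" on the 2-D grid.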
On Stochastic Complexity and Admissible Models for Neural Network Classifiers
For a detailed rationale the reader is referred to the work of Rissanen (1984) or Wallace and Freeman (1987) and the references therein. Note that the Minimum Description Length (MDL) technique (as Rissanen's approach has become known) is implicitly related to Maximum A Posteriori (MAP) Bayesian estimation techniques if cast in the appropriate framework.
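The MDL–MAP relation can be made explicit in the standard two-part formulation. For data D and hypothesis (model) H, the description length and its minimizer are:

```latex
L(D) \;=\; L(D \mid H) + L(H)
      \;=\; -\log_2 P(D \mid H) \;-\; \log_2 P(H),
\qquad
\operatorname*{arg\,min}_{H} L(D)
      \;=\; \operatorname*{arg\,max}_{H} P(D \mid H)\,P(H)
      \;=\; \operatorname*{arg\,max}_{H} P(H \mid D).
```

That is, minimizing the two-part description length is MAP estimation under the prior P(H) implicitly defined by the coding scheme.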
A Novel Approach to Prediction of the 3-Dimensional Structures of Protein Backbones by Neural Networks
Fredholm, Henrik, Bohr, Henrik, Bohr, Jakob, Brunak, Søren, Cotterill, Rodney M. J., Lautrup, Benny, Petersen, Steffen B.
Since Kendrew & Perutz solved the first protein structures, myoglobin and hemoglobin, and explained from the discovered structures how these proteins perform their function, it has been widely recognized that protein function is intimately linked with protein structure [1]. Within the last two decades X-ray crystallographers have solved the 3-dimensional (3D) structures of a steadily increasing number of proteins in the crystalline state, and recently 2D-NMR spectroscopy has emerged as an alternative method for small proteins in solution. Today approximately three hundred 3D structures have been solved by these methods, although only about half of them can be considered as truly different, and only around a hundred of them are solved at high resolution (that is, less than 2 Å). The number of protein sequences known today is well over 20,000, and this number seems to be growing at least one order of magnitude faster than the number of known 3D protein structures. Obviously, it is of great importance to develop tools that can predict structural aspects of proteins on the basis of knowledge acquired from known 3D structures.
VLSI Implementations of Learning and Memory Systems: A Review
ABSTRACT

A large number of VLSI implementations of neural network models have been reported. The diversity of these implementations is noteworthy. This paper attempts to put a group of representative VLSI implementations in perspective by comparing and contrasting them. Design tradeoffs are discussed and some suggestions for the direction of future implementation efforts are made.

IMPLEMENTATION

Changing the way information is represented can be beneficial.
Interaction Among Ocularity, Retinotopy and On-center/Off-center Pathways During Development
The development of projections from the retinas to the cortex is mathematically analyzed according to the previously proposed thermodynamic formulation of the self-organization of neural networks. Three types of submodality in the visual afferent pathways are considered in two models: model (A), in which ocularity and retinotopy are considered separately, and model (B), in which on-center/off-center pathways are considered in addition to ocularity and retinotopy. Model (A) shows striped ocular dominance spatial patterns and, in ocular dominance histograms, reveals a dip in the binocular bin. Model (B) displays spatially modulated irregular patterns and shows single-peak behavior in the histograms. Comparing the simulated results with observations shows that the ocular dominance spatial patterns and histograms for models (A) and (B) agree closely with those seen in monkeys and cats.
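For intuition about the striped patterns of model (A), here is an illustrative sketch, explicitly not the paper's thermodynamic formulation: a lateral-interaction model in the style of Swindale (1980), in which an ocular dominance field n(x) in [-1, 1] grows under a Mexican-hat coupling until it saturates into left/right-eye stripes. All parameters are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

N, steps = 128, 300
rng = np.random.default_rng(0)
n = 0.01 * rng.standard_normal((N, N))   # small random initial ocularity bias

for _ in range(steps):
    # Mexican-hat coupling: short-range excitation minus longer-range inhibition
    interaction = gaussian_filter(n, 2.0) - 0.9 * gaussian_filter(n, 4.0)
    n += 0.5 * interaction * (1 - n ** 2)  # growth saturates toward +/-1 (L/R eye)
    np.clip(n, -1, 1, out=n)

# np.sign(n) now shows alternating left-eye/right-eye stripes; histogramming n
# reveals the dip in the binocular (near-zero) bin described for model (A).
```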
Discrete Affine Wavelet Transforms For Analysis And Synthesis Of Feedforward Neural Networks
Pati, Y. C., Krishnaprasad, P. S.
In this paper we show that discrete affine wavelet transforms can provide a tool for the analysis and synthesis of standard feedforward neural networks. It is shown that wavelet frames for L^2(R) can be constructed based upon sigmoids. The spatio-spectral localization property of wavelets can be exploited in defining the topology and determining the weights of a feedforward network. Training a network constructed using the synthesis procedure described here involves minimization of a convex cost functional and therefore avoids pitfalls inherent in standard backpropagation algorithms. Extension of these methods to L^2(R^N) is also discussed.
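A minimal sketch of the key observation that a zero-mean "mother wavelet" can be built from sigmoids, so its dilated and translated copies are sums of sigmoidal units, i.e. hidden units of a feedforward network. The particular combination below (a second difference of sigmoids) is an illustrative choice, not necessarily the construction used in the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def psi(x, d=2.0):
    """Second difference of sigmoids: localized and zero-mean, since
    integral(sigmoid(x+d) - sigmoid(x)) = d cancels
    integral(sigmoid(x) - sigmoid(x-d)) = d."""
    return sigmoid(x + d) - 2.0 * sigmoid(x) + sigmoid(x - d)

def wavelet_unit(x, scale, shift):
    """One hidden unit of a synthesized network: a dilated, translated psi,
    itself a sum of three sigmoids sharing the same dilation and shift."""
    return psi(2.0 ** scale * x - shift)

# Numerical check that psi has (approximately) zero mean
x = np.linspace(-50, 50, 200001)
print(psi(x).sum() * (x[1] - x[0]))   # ~ 0
```

Because each such unit is localized in both space and frequency, the network topology and initial weights can be read off from which scale/shift cells of the wavelet frame are needed to represent the target function.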
Relaxation Networks for Large Supervised Learning Problems
Alspector, Joshua, Allen, Robert B., Jayakumar, Anthony, Zeppenfeld, Torsten, Meir, Ronny
Feedback connections are required so that the teacher signal on the output neurons can modify weights during supervised learning. Relaxation methods are needed for learning static patterns with full-time feedback connections. Feedback network learning techniques have not achieved wide popularity because of the still greater computational efficiency of back-propagation. We show by simulation that relaxation networks of the kind we are implementing in VLSI are capable of learning large problems, just as back-propagation networks are. A microchip incorporates deterministic mean-field theory learning as well as stochastic Boltzmann learning. A multiple-chip electronic system implementing these networks will make high-speed parallel learning in such networks feasible.
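A minimal sketch of deterministic mean-field (Boltzmann-style) relaxation learning of the general kind described: the network settles twice per pattern, once with the teacher signal clamped on the output and once free-running, and weights move by the difference of co-activations. The network size, damping, learning rate, and XOR task are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6                                        # units 0-1 input, 2-4 hidden, 5 output
W = 0.1 * rng.standard_normal((n, n))
W = (W + W.T) / 2                            # symmetric weights, as relaxation requires
np.fill_diagonal(W, 0)                       # no self-connections

def relax(clamp, steps=50):
    """Settle mean-field activations m_i = tanh(sum_j W_ij m_j), holding the
    units listed in `clamp` fixed at their given values."""
    m = np.zeros(n)
    for i, v in clamp.items():
        m[i] = v
    for _ in range(steps):
        upd = np.tanh(W @ m)
        for i, v in clamp.items():
            upd[i] = v
        m = 0.5 * m + 0.5 * upd              # damped updates for stability
    return m

data = [([-1, -1], -1), ([-1, 1], 1), ([1, -1], 1), ([1, 1], -1)]  # XOR
for epoch in range(2000):
    for x, t in data:
        clamped = relax({0: x[0], 1: x[1], 5: t})   # teacher (clamped) phase
        free = relax({0: x[0], 1: x[1]})            # free-running phase
        # Contrastive mean-field rule: <m_i m_j>_clamped - <m_i m_j>_free
        dW = 0.02 * (np.outer(clamped, clamped) - np.outer(free, free))
        np.fill_diagonal(dW, 0)
        W += dW
```

Replacing the deterministic tanh settling with stochastic binary units sampled at decreasing temperatures would give the stochastic Boltzmann learning mode the chip also supports.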