The Effects of Circuit Integration on a Feature Map Vector Quantizer

Neural Information Processing Systems

The effects of parameter modifications imposed by hardware constraints on a self-organizing feature map algorithm were examined. Performance was measured by the error rate of a speech recognition system which included this algorithm as part of the front-end processing. System parameters which were varied included weight (connection strength) quantization, adaptation quantization, distance measures, and circuit approximations which include device characteristics and process variability. Experiments using the TI isolated word database for 16 speakers demonstrated degradation in performance when weight quantization fell below 8 bits. The competitive nature of the algorithm relaxes constraints on uniformity and linearity, which makes it an excellent candidate for a fully analog circuit implementation. Prototype circuits have been fabricated and characterized following the constraints established through the simulation efforts.

1 Introduction

The self-organizing feature map algorithm developed by Kohonen [Kohonen, 1988] readily lends itself to the task of vector quantization for use in such areas as speech recognition.
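
As a concrete, minimal sketch of the kind of constraint studied here, the NumPy code below trains a small one-dimensional Kohonen map as a vector quantizer while rounding the weights to a fixed bit width after every update. The map size, learning rate, neighborhood schedule, and function names are illustrative assumptions, not the simulation setup used in the paper.

import numpy as np

def train_quantized_som(data, n_units=64, bits=8, epochs=10, lr=0.1):
    """1-D Kohonen map trained as a vector quantizer, with weights rounded
    to 'bits' bits after every update to mimic limited-precision hardware."""
    rng = np.random.default_rng(0)
    levels = 2 ** bits
    w = rng.uniform(0.0, 1.0, size=(n_units, data.shape[1]))
    for epoch in range(epochs):
        sigma = max(1.0, n_units / 4 * (1 - epoch / epochs))  # shrinking neighborhood
        for x in data:
            winner = np.argmin(np.sum((w - x) ** 2, axis=1))  # closest codeword
            h = np.exp(-(np.arange(n_units) - winner) ** 2 / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)                    # neighborhood update
            w = np.round(w * (levels - 1)) / (levels - 1)     # weight quantization
    return w

codebook = train_quantized_som(np.random.default_rng(1).uniform(size=(200, 4)))

Sweeping the bits argument downward on real feature vectors is a simple way to reproduce the kind of degradation below 8 bits that the abstract reports.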


Neural Implementation of Motivated Behavior: Feeding in an Artificial Insect

Neural Information Processing Systems

Most complex behaviors appear to be governed by internal motivational states or drives that modify an animal's responses to its environment. It is therefore of considerable interest to understand the neural basis of these motivational states. Drawing upon work on the neural basis of feeding in the marine mollusc Aplysia, we have developed a heterogeneous artificial neural network for controlling the feeding behavior of a simulated insect. We demonstrate that feeding in this artificial insect shares many characteristics with the motivated behavior of natural animals.

1 INTRODUCTION

While an animal's external environment certainly plays an extremely important role in shaping its actions, the behavior of even simpler animals is by no means solely reactive. The response of an animal to food, for example, cannot be explained only in terms of the physical stimuli involved. On two different occasions, the very same animal may behave in completely different ways when presented with seemingly identical pieces of food (e.g.


Dataflow Architectures: Flexible Platforms for Neural Network Simulation

Neural Information Processing Systems

Dataflow architectures are general computation engines optimized for the execution of fine-grain parallel algorithms. Neural networks can be simulated on these systems with certain advantages. In this paper, we review dataflow architectures, examine neural network simulation performance on a new-generation dataflow machine, compare that performance to other simulation alternatives, and discuss the benefits and drawbacks of the dataflow approach.
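
The core dataflow idea, firing an operation as soon as all of its operand tokens have arrived, can be mimicked in a few lines of Python. The toy scheduler and the two-input neuron graph below are illustrative assumptions, not the machine or the networks evaluated in the paper.

import math
from collections import deque

def run_dataflow(nodes, tokens):
    # fire any node whose operand tokens are all present; requeue the rest
    ready = deque(nodes)
    while ready:
        op, ins, out = ready.popleft()
        if all(name in tokens for name in ins):
            tokens[out] = op(*(tokens[name] for name in ins))
        else:
            ready.append((op, ins, out))
    return tokens

# a two-input sigmoid neuron, listed deliberately out of order: execution
# order is driven by data availability, not by program order
graph = [
    (lambda s: 1 / (1 + math.exp(-s)), ("net",), "y"),
    (lambda p, q: p + q, ("w1x1", "w2x2"), "net"),
    (lambda a: 0.5 * a, ("x1",), "w1x1"),
    (lambda b: -0.3 * b, ("x2",), "w2x2"),
]
print(run_dataflow(graph, {"x1": 1.0, "x2": 2.0})["y"])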


An Efficient Implementation of the Back-propagation Algorithm on the Connection Machine CM-2

Neural Information Processing Systems

In this paper, we present a novel implementation of the widely used Back-propagation neural net learning algorithm on the Connection Machine CM-2, a general-purpose, massively parallel computer with a hypercube topology. This implementation runs at about 180 million interconnections per second (IPS) on a 64K-processor CM-2. The main interprocessor communication operation used is 2D nearest-neighbor communication. The techniques developed here can be easily extended to implement other algorithms for layered neural nets on the CM-2, or on other massively parallel computers which have 2D or higher-degree connections among their processors.

1 Introduction

High-speed simulation of large artificial neural nets has become an important tool for solving real-world problems and for studying the dynamic behavior of large populations of interconnected processing elements [3, 2]. This work is intended to provide such a simulation tool for a widely used neural net learning algorithm, the Back-propagation (BP) algorithm [7]. The hardware we have used is the Connection Machine CM-2.
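
As a rough software picture of the layout (not the authors' CM-2 code), the sketch below tiles a weight matrix over a grid of virtual processors and accumulates each row of block products one step at a time, the role that 2D nearest-neighbor communication plays on the real machine. The grid shape and all names are assumptions for illustration.

import numpy as np

def forward_on_grid(W, x, grid=(4, 4)):
    """Compute W @ x with W tiled over a grid of virtual processors."""
    rows, cols = grid
    out = []
    for rb in np.array_split(W, rows, axis=0):
        blocks = zip(np.array_split(rb, cols, axis=1), np.array_split(x, cols))
        local = [cb @ xb for cb, xb in blocks]   # each processor's block product
        acc = local[0]
        for nxt in local[1:]:                    # pass the partial sum one hop
            acc = acc + nxt                      # at a time along the grid row
        out.append(acc)
    return np.concatenate(out)

rng = np.random.default_rng(1)
W, x = rng.standard_normal((8, 8)), rng.standard_normal(8)
assert np.allclose(forward_on_grid(W, x), W @ x)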


Bayesian Inference of Regular Grammar and Markov Source Models

Neural Information Processing Systems

In this paper we develop a Bayes criterion, which includes the Rissanen complexity, for inferring regular grammar models. We develop two methods for regular grammar Bayesian inference. The first method is based on treating the regular grammar as a 1-dimensional Markov source, and the second is based on the combinatoric characteristics of the regular grammar itself. We apply the resulting Bayes criteria to a particular example in order to show the efficiency of each method.
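
The flavor of such a criterion can be shown with a two-part code length: bits to describe the model parameters plus bits to encode the data under the model, with the smaller total preferred. The parameter-cost term below is a generic (params/2) log n penalty chosen for illustration, not the paper's exact criterion.

import math
from collections import Counter

def mdl_score(s, alphabet):
    """Two-part description length of string s under a first-order
    Markov source over the given alphabet (smaller is better)."""
    k = len(alphabet)
    trans = Counter(zip(s, s[1:]))   # bigram counts
    outs = Counter(s[:-1])           # outgoing counts per symbol
    nll = -sum(n * math.log2(n / outs[a]) for (a, b), n in trans.items())
    model_bits = (k * (k - 1) / 2) * math.log2(max(len(s), 2))
    return model_bits + nll

print(mdl_score("abababab", "ab"))   # a perfectly predictable string scores low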


Learning in Higher-Order "Artificial Dendritic Trees"

Neural Information Processing Systems

The computational territory between the linearly summing McCulloch-Pitts neuron and the nonlinear differential equations of Hodgkin & Huxley is relatively sparsely populated. Connectionists use variants of the former and computational neuroscientists struggle with the exploding parameter spaces provided by the latter. However, evidence from biophysical simulations suggests that the voltage transfer properties of synapses, spines and dendritic membranes involve many detailed nonlinear interactions, not just a squashing function at the cell body. Real neurons may indeed be higher-order nets. For the computationally minded, higher-order interactions mean, first of all, quadratic terms. This contribution presents a simple learning principle for a binary tree with a logistic/quadratic transfer function at each node. These functions, though highly nested, are shown to be capable of changing their shape in concert. The resulting tree structure receives inputs at its leaves and outputs, at the top, an estimate of the probability that the input pattern is a member of one of two classes.
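
A minimal sketch of such a tree is below: each internal unit passes a quadratic form of its two children through a logistic function, and the root value is read as a class-1 probability. The six-parameter node layout and the fixed weights are assumptions for illustration; the paper's learning principle is not reproduced here.

import math

def node(a, b, w):
    # logistic of linear terms, the cross term a*b, squares, and a bias
    z = w[0]*a + w[1]*b + w[2]*a*b + w[3]*a*a + w[4]*b*b + w[5]
    return 1.0 / (1.0 + math.exp(-z))

def tree_output(x, weights):
    """Evaluate a balanced binary tree over leaf inputs x (a power of two);
    weights holds one 6-vector per internal node, level by level."""
    layer, wi = list(x), 0
    while len(layer) > 1:
        layer = [node(layer[i], layer[i + 1], weights[wi + i // 2])
                 for i in range(0, len(layer), 2)]
        wi += len(layer)
    return layer[0]   # estimated P(class 1 | x)

w = [[0.5, -0.5, 1.0, 0.1, 0.1, 0.0]] * 3   # 4 leaves -> 3 internal nodes
print(tree_output([0.2, 0.9, 0.4, 0.7], w))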


HMM Speech Recognition with Neural Net Discrimination

Neural Information Processing Systems

Two approaches were explored which integrate neural net classifiers with Hidden Markov Model (HMM) speech recognizers. Both attempt to improve speech pattern discrimination while retaining the temporal processing advantages of HMMs. One approach used neural nets to provide second-stage discrimination following an HMM recognizer. On a small vocabulary task, Radial Basis Function (RBF) and back-propagation neural nets reduced the error rate substantially (from 7.9% to 4.2% for the RBF classifier). In a larger vocabulary task, neural net classifiers did not reduce the error rate; however, they outperformed Gaussian, Gaussian mixture, and k-nearest-neighbor (KNN) classifiers. In another approach, neural nets functioned as low-level acoustic-phonetic feature extractors. When classifying phonemes based on single 10 msec.
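
For reference, a bare-bones Radial Basis Function classifier of the kind named above can be written as Gaussian bumps around stored centers followed by a linear output layer. The least-squares output training and all sizes below are illustrative assumptions; the paper's RBF training details may differ.

import numpy as np

class RBF:
    def __init__(self, centers, width):
        self.c, self.s = np.asarray(centers), width

    def _phi(self, X):
        # Gaussian activation of each basis center for each input row
        d2 = ((X[:, None, :] - self.c[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.s ** 2))

    def fit(self, X, y):   # y: one-hot class targets
        self.W, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X) @ self.W

X = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.]])
y = np.eye(2)[[0, 0, 1, 1]]
print(RBF(centers=X, width=0.5).fit(X, y).predict(X).argmax(1))  # [0 0 1 1]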


Real-Time Computer Vision and Robotics Using Analog VLSI Circuits

Neural Information Processing Systems

The long-term goal of our laboratory is the development of analog resistive-network-based VLSI implementations of early and intermediate vision algorithms. We demonstrate an experimental circuit for smoothing and segmenting noisy and sparse depth data using the resistive fuse, and a 1-D edge-detection circuit for computing zero-crossings using two resistive grids with different space constants. To demonstrate the robustness of our algorithms and of the fabricated analog CMOS VLSI chips, we are mounting these circuits onto small mobile vehicles operating in real time in a laboratory environment.
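
A 1-D software analogue of the two-grid circuit is easy to state: a resistive grid smooths its input with a roughly exponential kernel whose decay is set by the space constant, and subtracting two smoothings with different space constants gives a signal whose zero-crossings mark edges. The kernel form and constants below are illustrative assumptions, not measurements of the chip.

import numpy as np

def grid_smooth(signal, space_const):
    # exponential kernel approximating a resistive grid's impulse response
    n = len(signal)
    k = np.exp(-np.abs(np.arange(n) - n // 2) / space_const)
    return np.convolve(signal, k / k.sum(), mode="same")

def zero_crossings(signal, s1=1.0, s2=4.0):
    d = grid_smooth(signal, s1) - grid_smooth(signal, s2)
    return np.where(np.diff(np.sign(d)) != 0)[0]

step = np.concatenate([np.zeros(32), np.ones(32)])
print(zero_crossings(step))   # a single crossing at the step edge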


Operational Fault Tolerance of CMAC Networks

Neural Information Processing Systems

The performance sensitivity of Albus' CMAC network was studied for the scenario in which faults are introduced into the adjustable weights after training has been accomplished. It was found that fault sensitivity was reduced with increased generalization when "loss of weight" faults were considered, but sensitivity was increased for "saturated weight" faults.

1 INTRODUCTION

Fault-tolerance is often cited as an inherent property of neural networks, and is thought by many to be a natural consequence of "massively parallel" computational architectures. Numerous anecdotal reports of fault-tolerance experiments, primarily in pattern classification tasks, abound in the literature. However, there has been surprisingly little rigorous investigation of the fault-tolerance properties of various network architectures in other application areas. In this paper we investigate the fault-tolerance of the CMAC (Cerebellar Model Arithmetic Computer) network [Albus 1975] in a systematic manner. CMAC networks have attracted much recent attention because of their successful application in robotic manipulator control [Ersu 1984, Miller 1986, Lane 1988].
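
The experiment is easy to restate in code: train a small CMAC, zero out a few weights to model "loss of weight" faults, and compare errors before and after. The 1-D layout, tiling scheme, and fault counts below are simplified assumptions, not the paper's full protocol.

import numpy as np

rng = np.random.default_rng(2)
C, n_cells = 8, 64   # generalization width and cells per tiling

def active(x):
    # the C overlapping tilings whose cells cover input x
    q = int(x * n_cells)
    return [(t, (q + t) // C % n_cells) for t in range(C)]

w = np.zeros((C, n_cells))
xs = rng.uniform(0, 1, 500)
for _ in range(20):                            # LMS training passes
    for x in xs:
        idx = active(x)
        err = np.sin(2 * np.pi * x) - sum(w[t][j] for t, j in idx)
        for t, j in idx:
            w[t][j] += 0.3 * err / C

def rmse():
    g = np.linspace(0, 1, 200)
    p = np.array([sum(w[t][j] for t, j in active(x)) for x in g])
    return np.sqrt(np.mean((p - np.sin(2 * np.pi * g)) ** 2))

print("before faults:", rmse())
w[rng.integers(0, C, 10), rng.integers(0, n_cells, 10)] = 0.0  # knock out 10 weights
print("after faults: ", rmse())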

