The Tempo 2 Algorithm: Adjusting Time-Delays By Supervised Learning

Neural Information Processing Systems

In this work we describe a new method that automatically adjusts the time delays and the widths of time windows in artificial neural networks. The inputs of the units are weighted by a Gaussian input window over time, which allows the learning rules for the delays and widths to be derived in the same way as for the weights. Our results on a phoneme classification task compare well with results obtained with the TDNN by Waibel et al., which was manually optimized for the same task.
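
As a rough illustration of the idea, here is a minimal Python sketch (not the authors' Tempo 2 implementation; the toy task, window placement, and learning rates are invented for this example) in which a unit's input is weighted by a Gaussian window whose delay d and width sigma are trained by gradient descent exactly like the weight w:

    import numpy as np

    rng = np.random.default_rng(0)
    T = 20                                 # time steps in the input window
    t = np.arange(T, dtype=float)

    # Hypothetical toy task: the target depends on the input around t = 12
    # through a Gaussian window of width 2 (both values invented here).
    x = rng.normal(size=(200, T))
    true_g = np.exp(-(t - 12.0) ** 2 / (2 * 2.0 ** 2))
    y_target = 0.7 * (x * true_g).sum(axis=1)

    w, d, sigma, lr = 0.1, 5.0, 4.0, 0.02  # initial weight, delay, width
    for epoch in range(5000):
        g = np.exp(-(t - d) ** 2 / (2 * sigma ** 2))   # Gaussian input window
        s = (x * g).sum(axis=1)                        # windowed input
        err = w * s - y_target
        # Chain rule through the window: the delay and width gradients have
        # the same form as the weight gradient, as the abstract notes.
        dg_dd = g * (t - d) / sigma ** 2
        dg_dsigma = g * (t - d) ** 2 / sigma ** 3
        w -= lr * np.mean(err * s)
        d -= lr * np.mean(err * w * (x * dg_dd).sum(axis=1))
        sigma -= lr * np.mean(err * w * (x * dg_dsigma).sum(axis=1))

    # On this toy problem the learned delay and width typically drift
    # toward the generating values (12 and 2).
    print(f"learned delay {d:.2f}, width {sigma:.2f}, weight {w:.2f}")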


Learning Time-varying Concepts

Neural Information Processing Systems

This work extends computational learning theory to situations in which concepts vary over time, e.g., system identification of a time-varying plant. We have extended formal definitions of concepts and learning to provide a framework in which an algorithm can track a concept as it evolves over time. Given this framework and focusing on memory-based algorithms, we have derived some PAC-style sample complexity results that determine, for example, when tracking is feasible. Using a similar framework focused on incremental tracking algorithms, we have also derived bounds on the mistake or error rates for some specific concept classes.

1 INTRODUCTION

The goal of our ongoing research is to extend computational learning theory to include concepts that can change or evolve over time. For example, face recognition is complicated by the fact that a person's face changes slowly with age and more quickly with changes in makeup, hairstyle, or facial hair.
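
To make the tracking setting concrete, here is a minimal sketch (hypothetical throughout: a drifting one-dimensional threshold concept and a fixed-size example memory; it illustrates the memory-based tracking idea, not the paper's algorithms or bounds). Because only the last m examples are kept, examples of the already-drifted concept age out of memory:

    import random

    random.seed(1)
    m = 30                      # memory window size (invented value)
    memory = []                 # most recent (x, label) pairs
    threshold_hat = 0.5         # hypothesis: label = 1 iff x >= threshold

    true_threshold = 0.2
    for step in range(2000):
        true_threshold += 0.0003          # the concept drifts slowly
        x = random.random()
        label = int(x >= true_threshold)
        memory.append((x, label))
        if len(memory) > m:
            memory.pop(0)                 # forget the oldest example
        # Fit to memory: place the threshold between the largest negative
        # and smallest positive example currently remembered.
        neg = [xi for xi, yi in memory if yi == 0]
        pos = [xi for xi, yi in memory if yi == 1]
        if neg and pos:
            threshold_hat = (max(neg) + min(pos)) / 2

    print(f"true {true_threshold:.3f}, tracked {threshold_hat:.3f}")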



Adjoint-Functions and Temporal Learning Algorithms in Neural Networks

Neural Information Processing Systems

The development of learning algorithms is generally based upon the minimization of an energy function. It is a fundamental requirement to compute the gradient of this energy function with respect to the various parameters of the neural architecture, e.g., synaptic weights, neural gain, etc. In principle, this requires solving a system of nonlinear equations for each parameter of the model, which is computationally very expensive. A new methodology for neural learning of time-dependent nonlinear mappings is presented. It exploits the concept of adjoint operators to enable a fast global computation of the network's response to perturbations in all the system's parameters. The importance of the time boundary conditions of the adjoint functions is discussed. An algorithm is presented in which the adjoint sensitivity equations are solved simultaneously (i.e., forward in time) along with the nonlinear dynamics of the neural networks. This methodology makes real-time applications and hardware implementation of temporal learning feasible.
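
To illustrate what an adjoint computation buys, here is a sketch on a scalar toy ODE (not the paper's network equations, and with the adjoint integrated backward in time for clarity rather than forward as the paper proposes): a single adjoint solve yields the loss gradient for a parameter without re-solving the dynamics per parameter, checked here against finite differences:

    import numpy as np

    # Toy problem: dynamics du/dt = -w*u, loss L = integral of (u - u*)^2.
    dt, T, w, u0, u_star = 0.01, 2.0, 1.5, 1.0, 0.3
    n = int(T / dt)

    # Forward pass: integrate the dynamics (forward Euler) and the loss.
    u = np.empty(n + 1); u[0] = u0
    for k in range(n):
        u[k + 1] = u[k] + dt * (-w * u[k])

    # Adjoint pass: d(lam)/dt = w*lam - 2*(u - u*), lam(T) = 0, integrated
    # backward; then dL/dw = integral of lam * (df/dw) dt with df/dw = -u.
    lam, dL_dw = 0.0, 0.0
    for k in range(n - 1, -1, -1):
        lam -= dt * (w * lam - 2 * (u[k] - u_star))
        dL_dw += dt * lam * (-u[k])

    # Finite-difference check; the two numbers should agree closely
    # (up to discretization error).
    def loss(wp):
        v, acc = u0, 0.0
        for _ in range(n):
            acc += dt * (v - u_star) ** 2
            v += dt * (-wp * v)
        return acc

    eps = 1e-5
    print(dL_dw, (loss(w + eps) - loss(w - eps)) / (2 * eps))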


Direct memory access using two cues: Finding the intersection of sets in a connectionist model

Neural Information Processing Systems

For lack of alternative models, search and decision processes have provided the dominant paradigm for human memory access using two or more cues, despite evidence against search as an access process (Humphreys, Wiles & Bain, 1990). We present an alternative process to search, based on calculating the intersection of sets of targets activated by two or more cues. Two methods of computing the intersection are presented, one using information about the possible targets, the other constraining the cue-target strengths in the memory matrix. Analysis using orthogonal vectors to represent the cues and targets demonstrates the competence of both processes, and simulations using sparse distributed representations demonstrate the performance of the latter process for tasks involving 2 and 3 cues.
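
A minimal sketch of the intersection idea follows (illustrative only; the paper's two methods, one using information about the possible targets and the other constraining the cue-target strengths, differ in detail). Each cue activates a set of candidate targets through a memory matrix, and multiplying the activations keeps only targets consistent with both cues:

    import numpy as np

    rng = np.random.default_rng(0)
    n_cues, n_targets = 6, 8          # toy sizes, invented for this example

    # Memory matrix: M[i, j] = 1 if cue i is associated with target j.
    M = (rng.random((n_cues, n_targets)) < 0.4).astype(float)
    M[2, 5] = M[4, 5] = 1.0           # ensure cues 2 and 4 share target 5

    def activate(cue):
        """Set of targets activated by a single cue (a row of M)."""
        return M[cue]

    # Intersection of the two activated sets via elementwise multiplication.
    a = activate(2) * activate(4)
    print("targets consistent with both cues:", np.nonzero(a)[0])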


Connection Topology and Dynamics in Lateral Inhibition Networks

Neural Information Processing Systems

We show analytically how the stability of two-dimensional lateral inhibition neural networks depends on the local connection topology. For various network topologies, we calculate the critical time delay for the onset of oscillation in continuous-time networks and present analytic phase diagrams characterizing the dynamics of discrete-time networks.
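
As a loose illustration (a toy discrete-time model with invented gain values, not the paper's analytic calculation): a 2-D grid with nearest-neighbor inhibition settles to a fixed point for weak inhibition but falls into oscillation once the gain crosses a critical value, the discrete-time analogue of the delay-induced onset of oscillation:

    import numpy as np

    def simulate(gain, steps=300, n=16, seed=0):
        rng = np.random.default_rng(seed)
        x = rng.random((n, n))
        for _ in range(steps):
            # Sum of the four nearest neighbors (toroidal wrap-around).
            nb = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                  np.roll(x, 1, 1) + np.roll(x, -1, 1))
            x_prev, x = x, np.tanh(1.0 - gain * nb)  # input 1, inhibition
        return np.abs(x - x_prev).max()              # residual change

    # Small residual change = fixed point; large = sustained oscillation.
    for gain in (0.1, 0.2, 0.5, 1.0, 2.0):
        print(f"gain {gain}: final per-step change {simulate(gain):.4f}")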


Flight Control in the Dragonfly: A Neurobiological Simulation

Neural Information Processing Systems

Neural network simulations of the dragonfly flight neurocontrol system have been developed to understand how this insect uses complex, unsteady aerodynamics. The simulation networks account for the ganglionic spatial distribution of cells as well as the physiologic operating range and the stochastic cellular firing history of each neuron. In addition, the motor neuron firing patterns, "flight command sequences", were utilized. Simulation training was targeted against both the cellular and flight motor neuron firing patterns. The trained networks accurately resynthesized the intraganglionic cellular firing patterns. These in turn controlled the motor neuron firing patterns that drive wing musculature during flight. Such networks provide both neurobiological analysis tools and first-generation controls for the use of "unsteady" aerodynamics.


Statistical Mechanics of Temporal Association in Neural Networks

Neural Information Processing Systems

Basic computational functions of associative neural structures may be analytically studied within the framework of attractor neural networks where static patterns are stored as stable fixed-points for the system's dynamics. If the interactions between single neurons are instantaneous and mediated by symmetric couplings, there is a Lyapunov function for the retrieval dynamics (Hopfield 1982). The global computation corresponds in that case to a downhill motion in an energy landscape created by the stored information. Methods of equilibrium statistical mechanics may be applied and permit a quantitative analysis of the asymptotic network behavior (Amit et al. 1985, 1987). The existence of a Lyapunov function is thus of great conceptual as well as technical importance. Nevertheless, one should be aware that environmental inputs to a neural net always provide information in both space and time. It is therefore desirable to extend the original Hopfield scheme and to explore possibilities for a joint representation of static patterns and temporal associations.
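
The Lyapunov property referenced above can be seen in a few lines (a minimal sketch of the standard Hopfield construction; the network size, pattern count, and corruption level are toy values chosen here): with symmetric Hebbian couplings and asynchronous updates, the energy E = -1/2 s^T J s never increases, so retrieval is downhill motion in the landscape created by the stored information:

    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 100, 3
    patterns = rng.choice([-1, 1], size=(P, N))

    # Hebbian couplings: symmetric, zero self-coupling.
    J = (patterns.T @ patterns) / N
    np.fill_diagonal(J, 0.0)

    energy = lambda s: -0.5 * s @ J @ s

    s = patterns[0].copy()
    s[: N // 4] *= -1                      # corrupt a quarter of the pattern
    E = energy(s)
    for _ in range(5 * N):                 # asynchronous single-unit updates
        i = rng.integers(N)
        s[i] = 1 if J[i] @ s >= 0 else -1
        E_new = energy(s)
        assert E_new <= E + 1e-12          # energy is a Lyapunov function
        E = E_new

    overlap = (s @ patterns[0]) / N        # should approach 1 on retrieval
    print(f"final energy {E:.3f}, overlap with stored pattern {overlap:.2f}")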


A Method for the Efficient Design of Boltzmann Machines for Classification Problems

Neural Information Processing Systems

A Boltzmann machine ([AHS], [HS], [AK]) is a neural network model in which the units update their states according to a stochastic decision rule. It consists of a set U of units, a set C of unordered pairs of elements of U, and an assignment of connection strengths S: C → R. A configuration of a Boltzmann machine is a map k: U → {0, 1}.
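
A minimal sketch of the stochastic decision rule on top of this definition (the connection strengths, temperature, and topology below are arbitrary toy values): a randomly chosen unit u turns on with probability sigmoid(gap/T), where the gap is the change in consensus from setting k(u) = 1 in the current configuration:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5
    # Symmetric connection strengths over unordered pairs, as a matrix.
    S = rng.normal(size=(n, n))
    S = (S + S.T) / 2
    np.fill_diagonal(S, 0.0)

    k = rng.integers(0, 2, size=n).astype(float)   # configuration k: U -> {0, 1}
    T = 1.0                                        # temperature

    for step in range(1000):
        u = rng.integers(n)
        gap = S[u] @ k                 # gain from setting k[u] = 1
        p_on = 1.0 / (1.0 + np.exp(-gap / T))
        k[u] = 1.0 if rng.random() < p_on else 0.0

    print("sampled configuration:", k.astype(int))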


Back Propagation Implementation on the Adaptive Solutions CNAPS Neurocomputer Chip

Neural Information Processing Systems

Most investigators have created large parallel computers or special-purpose chips limited to a small subset of algorithms. The Adaptive Solutions CNAPS architecture is a general-purpose 64-processor chip that supports on-chip learning and is capable of implementing most current algorithms. Implementation of the popular Back Propagation (BP) algorithm will demonstrate the speed and versatility of this new chip.
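
For reference, the computation being accelerated looks like the following (a generic single-threaded Back Propagation sketch on XOR, not CNAPS-specific code; the layer sizes and learning rate are invented for this example):

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, W2 = rng.normal(0, 1, (2, 8)), rng.normal(0, 1, (8, 1))
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for epoch in range(5000):
        h = sigmoid(X @ W1)                       # forward pass
        out = sigmoid(h @ W2)
        d_out = (out - y) * out * (1 - out)       # backward pass
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out                   # weight updates
        W1 -= 0.5 * X.T @ d_h

    print(out.round(2).ravel())   # should approach [0, 1, 1, 0]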