
Inverse Dynamics of Speech Motor Control

Neural Information Processing Systems

This inverse dynamics model allows the use of a faster speech motor control scheme, which can be applied to phoneme-to-speech synthesis via musculo-skeletal system dynamics, or to future use in speech recognition. The forward acoustic model, which is the mapping from articulator trajectories to the acoustic parameters, was improved by adding velocity and voicing information inputs to distinguish acoustic
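As a hedged illustration of such a forward acoustic model, the sketch below maps articulator positions, their velocities, and a voicing flag to acoustic parameters with a small feedforward network; the layer sizes, synthetic data, and plain NumPy training loop are assumptions for illustration, not the authors' implementation.

```python
# Sketch of a forward acoustic model: articulator trajectory features
# (positions + velocities + voicing flag) -> acoustic parameters.
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions: 7 positions + 7 velocities + 1 voicing bit -> 12 acoustic params.
n_samples, n_in, n_hidden, n_out = 200, 15, 20, 12
X = rng.normal(size=(n_samples, n_in))          # stand-in articulator features
Y = rng.normal(size=(n_samples, n_out))         # stand-in acoustic parameters

W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))

lr = 0.01
for _ in range(500):                            # batch gradient descent on squared error
    H = np.tanh(X @ W1)                         # hidden layer
    Y_hat = H @ W2                              # predicted acoustic parameters
    err = Y_hat - Y
    grad_W2 = (H.T @ err) / n_samples
    grad_W1 = (X.T @ ((err @ W2.T) * (1 - H**2))) / n_samples
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```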


Learning in Compositional Hierarchies: Inducing the Structure of Objects from Data

Neural Information Processing Systems

Model-based object recognition solves the problem of invariant recognition by relying on stored prototypes at unit scale positioned at the origin of an object-centered coordinate system. Elastic matching techniques are used to find a correspondence between features of the stored model and the data, and can also compute the parameters of the transformation the observed instance has undergone relative to the stored model.
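As a hedged sketch of the transformation-recovery step, the function below estimates a similarity transform (scale, rotation, translation) from already-corresponded model and data features by least squares; this Procrustes-style solution is a standard technique standing in for the elastic matching machinery of the paper, and the function name is illustrative.

```python
import numpy as np

def estimate_similarity_transform(model_pts, data_pts):
    """Least-squares scale s, rotation R, translation t with data ~ s*R*model + t.

    model_pts, data_pts: (N, 2) arrays of corresponded feature locations.
    Illustrates recovering the transformation an observed instance has undergone
    relative to the stored, object-centered model.
    """
    mu_m, mu_d = model_pts.mean(0), data_pts.mean(0)
    M, D = model_pts - mu_m, data_pts - mu_d
    U, S, Vt = np.linalg.svd(D.T @ M)              # SVD of the cross-covariance
    signs = np.ones(len(S))
    if np.linalg.det(U @ Vt) < 0:                  # enforce a proper rotation
        signs[-1] = -1
    R = U @ np.diag(signs) @ Vt
    s = (S * signs).sum() / (M**2).sum()           # isotropic scale
    t = mu_d - s * R @ mu_m                        # translation
    return s, R, t
```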


Fast Pruning Using Principal Components

Neural Information Processing Systems

The assumption is that there exists an underlying (possibly noisy) functional relationship relating the outputs to the inputs, y = f(u, e), where e denotes the noise. The aim of the learning process is to approximate this relationship based on the training set.
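A minimal sketch of this setup under assumed toy choices (a one-dimensional f, Gaussian noise, and a polynomial least-squares approximator rather than the network and pruning procedure of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Underlying noisy relationship y = f(u, e): here f adds noise e to sin(2*pi*u).
u = rng.uniform(0.0, 1.0, size=200)
y = np.sin(2 * np.pi * u) + 0.1 * rng.normal(size=u.shape)

# Approximate the relationship from the training set (degree-5 polynomial fit).
coeffs = np.polyfit(u, y, deg=5)
y_hat = np.polyval(coeffs, u)
print("training MSE:", np.mean((y - y_hat) ** 2))
```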


Memory-Based Methods for Regression and Classification

Neural Information Processing Systems

Memory-based learning methods operate by storing all (or most) of the training data and deferring analysis of that data until "run time" (i.e., when a query is presented and a decision or prediction must be made). When a query is received, these methods generally answer the query by retrieving and analyzing a small subset of the training data-namely, data in the immediate neighborhood of the query point. In short, memory-based methods are "lazy" (they wait until the query) and "local" (they use only a local neighborhood). The purpose of this workshop was to review the state-of-the-art in memory-based methods and to understand their relationship to "eager" and "global" learning algorithms such as batch backpropagation. There are two essential components to any memory-based algorithm: the method for defining the "local neighborhood" and the learning method that is applied to the training examples in the local neighborhood.
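A hedged sketch of those two components, using k nearest neighbours to define the local neighbourhood and a distance-weighted average as the local learner (the kernel, k, and function names are illustrative choices, not a specific method from the workshop):

```python
import numpy as np

def knn_predict(X_train, y_train, query, k=5):
    """Memory-based prediction: store all training data, defer work to query time.

    Component 1: the local neighbourhood = the k nearest stored points.
    Component 2: the local learner = a distance-weighted average of their targets.
    """
    dists = np.linalg.norm(X_train - query, axis=1)
    idx = np.argsort(dists)[:k]               # neighbourhood of the query point
    w = 1.0 / (dists[idx] + 1e-9)             # closer points get more weight
    return np.sum(w * y_train[idx]) / np.sum(w)

# "Training" is just storing the data; all analysis happens at query time.
X_train = np.random.rand(100, 2)
y_train = np.sin(X_train[:, 0]) + X_train[:, 1]
print(knn_predict(X_train, y_train, np.array([0.5, 0.5])))
```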



Asynchronous Dynamics of Continuous Time Neural Networks

Neural Information Processing Systems

Motivated by mathematical modeling, analog implementation, and distributed simulation of neural networks, we present a definition of asynchronous dynamics of general continuous-time (CT) dynamical systems defined by ordinary differential equations, based on notions of local times and communication times. We provide some preliminary results on global asymptotic convergence of asynchronous dynamics for contractive and monotone CT dynamical systems. When applying the results to neural networks, we obtain conditions that ensure additive-type neural networks are asynchronizable.
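One way to picture such asynchronous dynamics is sketched below for an additive-type network dx_i/dt = -x_i + sum_j w_ij tanh(x_j) + I_i: each unit advances with its own local step size and integrates using the possibly stale states it last received, communicating only intermittently. The event schedule, step sizes, and communication rule are illustrative assumptions, not the paper's formal definition.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
W = 0.3 * rng.normal(size=(n, n))             # connection weights
I = rng.normal(size=n)                        # constant external inputs
x = rng.normal(size=n)                        # each unit's true local state
x_comm = x.copy()                             # last communicated (possibly stale) states

for _ in range(5000):
    i = rng.integers(n)                       # a unit updates at its own local time
    dt = rng.uniform(0.01, 0.05)              # local step size varies per event
    # Additive dynamics evaluated with the stale states this unit has received.
    dx = -x[i] + W[i] @ np.tanh(x_comm) + I[i]
    x[i] += dt * dx
    if rng.random() < 0.5:                    # communication happens only occasionally
        x_comm[i] = x[i]

print(x)                                      # settles for suitably contractive W
```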


A Learning Analog Neural Network Chip with Continuous-Time Recurrent Dynamics

Neural Information Processing Systems

The recurrent network, containing six continuous-time analog neurons and 42 free parameters (connection strengths and thresholds), is trained to generate time-varying outputs approximating given periodic signals presented to the network. The chip implements a stochastic perturbative algorithm, which observes the error gradient along random directions in the parameter space for error-descent learning. In addition to the integrated learning functions and the generation of pseudo-random perturbations, the chip provides for teacher forcing and long-term storage of the volatile parameters. The network learns a 1 kHz circular trajectory in 100 sec. The chip occupies 2 mm x 2 mm in a 2 µm CMOS process and dissipates 1.2 mW. Exact gradient-descent algorithms for supervised learning in dynamic recurrent networks [1-3] are fairly complex and do not provide for a scalable implementation in a standard 2-D VLSI process. We have implemented a fairly simple and scalable
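In software terms, the stochastic perturbative idea amounts to probing the error along a random direction in parameter space and descending along the estimated slope. Below is a hedged, generic sketch of that scheme (a weight-perturbation-style update on a toy quadratic error), not a model of the chip's analog circuitry or its teacher-forcing machinery.

```python
import numpy as np

rng = np.random.default_rng(3)

def perturbative_step(params, error_fn, sigma=1e-3, lr=0.01):
    """Observe the error along one random direction and take a descent step."""
    direction = rng.choice([-1.0, 1.0], size=params.shape)    # random probe direction
    e_plus = error_fn(params + sigma * direction)
    e_minus = error_fn(params - sigma * direction)
    slope = (e_plus - e_minus) / (2 * sigma)                   # directional derivative estimate
    return params - lr * slope * direction

# Toy usage: minimise a quadratic error over 42 free parameters.
target = rng.normal(size=42)
error = lambda p: np.sum((p - target) ** 2)
p = np.zeros(42)
for _ in range(2000):
    p = perturbative_step(p, error)
print("final error:", error(p))
```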


What Does the Hippocampus Compute?: A Precis of the 1993 NIPS Workshop

Neural Information Processing Systems

Computational models of the hippocampal region provide an important method for understanding the functional role of this brain system in learning and memory. The presentations in this workshop focused on how modeling can lead to a unified understanding of the interplay among hippocampal physiology, anatomy, and behavior. One approach can be characterized as "top-down" analyses of the neuropsychology of memory, drawing upon brain-lesion studies in animals and humans. Other models take a "bottom-up" approach, seeking to infer emergent computational and functional properties from detailed analyses of circuit connectivity and physiology (see Gluck & Granger, 1993, for a review). Among the issues discussed were: (1) integration of physiological and behavioral theories of hippocampal function, (2) similarities and differences between animal and human studies, (3) representational vs. temporal properties of hippocampal-dependent behaviors, (4) rapid vs. incremental learning, (5) multiple vs. unitary memory systems, (6) spatial navigation and memory, and (7) hippocampal interaction with other brain systems.


Connectionist Modeling and Parallel Architectures

Neural Information Processing Systems

University of Rochester) and ICSIM (ICSI Berkeley) allow the definition of unit types and complex connectivity patterns. On a very high level of abstraction, simulators like tlearn (UCSD) allow the easy realization of predefined network architectures (feedforward networks) and learning algorithms such as backpropagation. Ben Gomes, International Computer Science Institute (Berkeley), introduced the Connectionist Supercomputer 1. The CNS-1 is a multiprocessor system designed for the moderate-precision fixed-point operations used extensively in connectionist network calculations. Custom VLSI digital processors employ an on-chip vector coprocessor unit tailored for neural network calculations and controlled by a RISC scalar CPU.


Agnostic PAC-Learning of Functions on Analog Neural Nets

Neural Information Processing Systems

There exist a number of negative results ([J], [BR], [KV]) about learning on neural nets in Valiant's model [V] for probably approximately correct learning ("PAC-learning"). These negative results are based on an asymptotic analysis where one lets the number of nodes in the neural net go to infinity. Hence this analysis is less adequate for the investigation of learning on a small fixed neural net.