Odor Processing in the Bee: A Preliminary Study of the Role of Central Input to the Antennal Lobe

Neural Information Processing Systems

Based on precise anatomical data of the bee's olfactory system, we propose an investigation of the possible mechanisms of modulation and control between the two levels of olfactory information processing: the antennal lobe glomeruli and the mushroom bodies. We use simplified neurons, but realistic architecture. As a first conclusion, we postulate that the feature extraction performed by the antennal lobe (glomeruli and interneurons) necessitates central input from the mushroom bodies for fine tuning.


Non-Intrusive Gaze Tracking Using Artificial Neural Networks

Neural Information Processing Systems

We have developed an artificial neural network based gaze tracking system which can be customized to individual users. Unlike other gaze trackers, which normally require the user to wear cumbersome headgear, or to use a chin rest to ensure head immobility, our system is entirely non-intrusive. Currently, the best intrusive gaze tracking systems are accurate to approximately 0.75 degrees. In our experiments, we have been able to achieve an accuracy of 1.5 degrees, while allowing head mobility. In this paper we present an empirical analysis of the performance of a large number of artificial neural network architectures for this task.


The Parti-Game Algorithm for Variable Resolution Reinforcement Learning in Multidimensional State-Spaces

Neural Information Processing Systems

Andrew W. Moore, School of Computer Science, Carnegie-Mellon University, Pittsburgh, PA 15213

Parti-game is a new algorithm for learning from delayed rewards in high-dimensional real-valued state-spaces. In high dimensions it is essential that learning does not explore or plan over state space uniformly. Parti-game maintains a decision-tree partitioning of state-space and applies game-theory and computational geometry techniques to efficiently and reactively concentrate high resolution only on critical areas. Many simulated problems have been tested, ranging from 2-dimensional to 9-dimensional state-spaces, including mazes, path planning, nonlinear dynamics, and uncurling snake robots in restricted spaces. In all cases, a good solution is found in less than twenty trials and a few minutes.

1 REINFORCEMENT LEARNING

Reinforcement learning [Samuel, 1959, Sutton, 1984, Watkins, 1989, Barto et al., 1991] is a promising method for control systems to program and improve themselves.
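
To make the variable-resolution idea concrete, here is a minimal Python sketch of a decision-tree-style partition of a continuous state space in which individual cells can be split where finer resolution is needed. It is a toy illustration of the partitioning data structure only, not the game-theoretic planning that Parti-game layers on top of it; the Cell, Partition, locate, and refine names are ours.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Cell:
    lo: Tuple[float, ...]   # lower corner of the axis-aligned box
    hi: Tuple[float, ...]   # upper corner

    def contains(self, x) -> bool:
        return all(l <= xi < h for l, xi, h in zip(self.lo, x, self.hi))

    def split(self) -> List["Cell"]:
        """Split along the longest axis, giving two half-size children."""
        widths = [h - l for l, h in zip(self.lo, self.hi)]
        axis = widths.index(max(widths))
        mid = (self.lo[axis] + self.hi[axis]) / 2.0
        hi1, lo2 = list(self.hi), list(self.lo)
        hi1[axis], lo2[axis] = mid, mid
        return [Cell(self.lo, tuple(hi1)), Cell(tuple(lo2), self.hi)]

@dataclass
class Partition:
    cells: List[Cell]

    def locate(self, x) -> Cell:
        return next(c for c in self.cells if c.contains(x))

    def refine(self, cell: Cell) -> None:
        """Replace one 'critical' cell with its two children."""
        self.cells.remove(cell)
        self.cells.extend(cell.split())

# Usage: start with one coarse cell covering [0,1]^2 and refine where needed.
part = Partition([Cell((0.0, 0.0), (1.0, 1.0))])
part.refine(part.locate((0.9, 0.1)))   # add resolution near a critical state
print(len(part.cells))                 # -> 2
```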


Dynamic Modulation of Neurons and Networks

Neural Information Processing Systems

Biological neurons have a variety of intrinsic properties because of the large number of voltage-dependent currents that control their activity. Neuromodulatory substances modify both the balance of conductances that determine intrinsic properties and the strength of synapses. These mechanisms alter circuit dynamics, and suggest that functional circuits exist only in the modulatory environment in which they operate.

1 INTRODUCTION

Many studies of artificial neural networks employ model neurons and synapses that are considerably simpler than their biological counterparts.


Memory-Based Methods for Regression and Classification

Neural Information Processing Systems

Moore, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213

Memory-based learning methods operate by storing all (or most) of the training data and deferring analysis of that data until "run time" (i.e., when a query is presented and a decision or prediction must be made). When a query is received, these methods generally answer the query by retrieving and analyzing a small subset of the training data, namely, data in the immediate neighborhood of the query point. In short, memory-based methods are "lazy" (they wait until the query) and "local" (they use only a local neighborhood). The purpose of this workshop was to review the state-of-the-art in memory-based methods and to understand their relationship to "eager" and "global" learning algorithms such as batch backpropagation. There are two essential components to any memory-based algorithm: the method for defining the "local neighborhood" and the learning method that is applied to the training examples in the local neighborhood.
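
As a hedged illustration of the "lazy and local" recipe described above, here is a minimal distance-weighted k-nearest-neighbor regressor in Python; the function name and weighting scheme are our own choices, not anything prescribed by the workshop.

```python
import numpy as np

def knn_regress(train_x, train_y, query, k=5):
    """Answer a query 'lazily': find the k nearest stored examples and
    return a distance-weighted average of their targets, with no global model."""
    d = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-9)          # closer neighbours count more
    return np.sum(w * train_y[nearest]) / np.sum(w)

# Usage: store all the data up front, defer all analysis until the query.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
print(knn_regress(X, y, np.array([1.0]), k=10))   # ~ sin(1.0)
```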


Neural Network Definitions of Highly Predictable Protein Secondary Structure Classes

Neural Information Processing Systems

Alan Lapedes, Complex Systems Group (TI3), LANL, MS B213, Los Alamos, N.M. 87545, and The Santa Fe Institute, Santa Fe, New Mexico; Evan Steeg, Department of Computer Science, University of Toronto, Toronto, Canada; Robert Farber, Complex Systems Group (TI3), LANL, MS B213, Los Alamos, N.M. 87545

We use two co-evolving neural networks to determine new classes of protein secondary structure which are significantly more predictable from local amino acid sequence than the conventional secondary structure classification. Accurate prediction of the conventional secondary structure classes (alpha helix, beta strand, and coil) from primary sequence has long been an important problem in computational molecular biology. Neural networks have been a popular method to attempt to predict these conventional secondary structure classes. Accuracy has been disappointingly low. The algorithm presented here uses neural networks to simultaneously examine both sequence and structure data, and to evolve new classes of secondary structure that can be predicted from sequence with significantly higher accuracy than the conventional classes. These new classes have both similarities to, and differences from, the conventional alpha helix, beta strand, and coil.


Backpropagation Convergence Via Deterministic Nonmonotone Perturbed Minimization

Neural Information Processing Systems

The fundamental backpropagation (BP) algorithm for training artificial neural networks is cast as a deterministic nonmonotone perturbed gradient method. Under certain natural assumptions, such as the series of learning rates diverging while the series of their squares converges, it is established that every accumulation point of the online BP iterates is a stationary point of the BP error function. The results presented cover serial and parallel online BP, modified BP with a momentum term, and BP with weight decay.
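
In notation chosen here for illustration (weights w_k, learning rates eta_k, perturbation term e_k, error function E), the setting the abstract describes can be sketched as a perturbed gradient iteration together with the classical step-size conditions:

```latex
% A sketch of the assumptions, not a reproduction of the paper's notation:
% online BP viewed as a perturbed gradient step on the error function E.
\[
  w_{k+1} \;=\; w_k \;-\; \eta_k \bigl( \nabla E(w_k) + e_k \bigr),
  \qquad
  \sum_{k=1}^{\infty} \eta_k = \infty,
  \qquad
  \sum_{k=1}^{\infty} \eta_k^{2} < \infty .
\]
```

Intuitively, the divergent sum keeps the iterates from stalling before they reach a stationary point, while the convergent sum of squares keeps the accumulated effect of the perturbations bounded.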


Event-Driven Simulation of Networks of Spiking Neurons

Neural Information Processing Systems

A fast event-driven software simulator has been developed for simulating large networks of spiking neurons and synapses. The primitive network elements are designed to exhibit biologically realistic behaviors, such as spiking, refractoriness, adaptation, axonal delays, summation of post-synaptic current pulses, and tonic current inputs. The efficient event-driven representation allows large networks to be simulated in a fraction of the time that would be required for a full compartmental-model simulation. Corresponding analog CMOS VLSI circuit primitives have been designed and characterized, so that large-scale circuits may be simulated prior to fabrication.

1 Introduction

Artificial neural networks typically use an abstraction of real neuron behaviour, in which the continuously varying mean firing rate of the neuron is presumed to carry the information about the neuron's time-varying state of excitation [1]. This useful simplification allows the neuron's state to be represented as a time-varying continuous-amplitude quantity. However, spike timing is known to be important in many biological systems.
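
A minimal sketch of the event-driven idea, assuming a toy integrate-and-fire model with per-source axonal delays and no refractoriness or adaptation; the network, weights, delays, and threshold below are invented for illustration, not taken from the paper.

```python
import heapq

# Toy event-driven simulation: the only events are spike arrivals, so the
# simulator does no work during the silent intervals between spikes.
THRESHOLD = 1.0
weights = {0: [(1, 0.6), (2, 0.4)],     # source -> [(target, synaptic weight)]
           1: [(2, 0.7)],
           2: []}
delay = {0: 1.0, 1: 2.0, 2: 1.0}        # axonal delay of each source neuron
potential = {n: 0.0 for n in weights}   # summed post-synaptic input so far

# (arrival time, target neuron, injected charge); start with external drive.
events = [(0.0, 0, 1.0), (5.0, 0, 1.0)]
heapq.heapify(events)

while events:
    t, n, charge = heapq.heappop(events)
    potential[n] += charge
    if potential[n] >= THRESHOLD:       # the neuron fires...
        potential[n] = 0.0              # ...and resets (no refractory period here)
        print(f"t={t:.2f}: neuron {n} spikes")
        for tgt, w in weights[n]:       # its spike arrives after the axonal delay
            heapq.heappush(events, (t + delay[n], tgt, w))
```

Because the priority queue holds only spike arrivals, quiet stretches of simulated time cost nothing, which is the source of the speed-up over a full compartmental-model simulation that the abstract claims.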


An Analog VLSI Saccadic Eye Movement System

Neural Information Processing Systems

In an effort to understand saccadic eye movements and their relation to visual attention and other forms of eye movements, we, in collaboration with a number of other laboratories, are carrying out a large-scale effort to design and build a complete primate oculomotor system using analog CMOS VLSI technology. Using this technology, a low power, compact, multi-chip system has been built which works in real-time using real-world visual inputs. We describe in this paper the performance of an early version of such a system including a 1-D array of photoreceptors mimicking the retina, a circuit computing the mean location of activity representing the superior colliculus, a saccadic burst generator, and a one degree-of-freedom rotational platform which models the dynamic properties of the primate oculomotor plant.

1 Introduction

When we look around our environment, we move our eyes to center and stabilize objects of interest onto our fovea. In order to achieve this, our eyes move in quick jumps with short pauses in between. These quick jumps (up to 750 deg/sec in humans) are known as saccades and are seen in both exploratory eye movements and as reflexive eye movements in response to sudden visual, auditory, or somatosensory stimuli. Since the intent of the saccade is to bring new objects of interest onto the fovea, it can be considered a primitive attentional mechanism.
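
As a rough software analogue of the "mean location of activity" computation attributed to the superior colliculus chip, the sketch below takes a 1-D photoreceptor activity profile and returns a centre-of-mass saccade target; the field of view, array size, and mapping to degrees are assumptions of ours, not measured parameters of the system.

```python
import numpy as np

def saccade_target(activity, fov_deg=40.0):
    """Mean location (centre of mass) of activity on a 1-D photoreceptor
    array, mapped to an eye-rotation command in degrees."""
    positions = np.linspace(-fov_deg / 2, fov_deg / 2, len(activity))
    return float(np.sum(positions * activity) / np.sum(activity))

# Usage: a bright target three quarters of the way across a 64-pixel "retina".
retina = np.zeros(64)
retina[48] = 1.0
print(saccade_target(retina))   # roughly +10 degrees to the right
```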


Globally Trained Handwritten Word Recognizer using Spatial Representation, Convolutional Neural Networks, and Hidden Markov Models

Neural Information Processing Systems

We introduce a new approach for online recognition of handwritten words written in unconstrained mixed style. The preprocessor performs a word-level normalization by fitting a model of the word structure using the EM algorithm. Words are then coded into low-resolution "annotated images" where each pixel contains information about trajectory direction and curvature. The recognizer is a convolution network which can be spatially replicated. From the network output, a hidden Markov model produces word scores.
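
To illustrate how a word score can be produced from per-frame network outputs, here is a hedged toy sketch: a best-path (Viterbi-style) alignment of frames to the characters of a candidate word, scored with frame-level character log-probabilities. The real system's HMM also handles transition modelling and normalization; the function name, alphabet argument, and scoring details below are ours.

```python
import numpy as np

def word_log_score(frame_logp, word, alphabet):
    """Best monotone alignment of T frames to the characters of `word`,
    scored with per-frame character log-probabilities frame_logp[t, c]."""
    idx = [alphabet.index(ch) for ch in word]
    T, S = frame_logp.shape[0], len(idx)
    dp = np.full((T, S), -np.inf)
    dp[0, 0] = frame_logp[0, idx[0]]
    for t in range(1, T):
        for s in range(S):
            stay = dp[t - 1, s]                           # stay on the same character
            move = dp[t - 1, s - 1] if s > 0 else -np.inf # advance to the next one
            dp[t, s] = max(stay, move) + frame_logp[t, idx[s]]
    return dp[-1, -1]

# Usage with made-up network output: 6 frames over a 3-letter alphabet.
alphabet = "abc"
logits = np.random.default_rng(0).standard_normal((6, 3))
frame_logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
print(word_log_score(frame_logp, "ab", alphabet))
```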