
Neural Network Models of Chemotaxis in the Nematode Caenorhabditis elegans

Neural Information Processing Systems

Thomas C. Ferree, Ben A. Marcotte, Shawn R. Lockery
Institute of Neuroscience, University of Oregon, Eugene, Oregon 97403

Abstract: We train recurrent networks to control chemotaxis in a computer model of the nematode C. elegans. The model presented is based closely on the body mechanics, behavioral analyses, neuroanatomy and neurophysiology of C. elegans, each imposing constraints relevant for information processing. Simulated worms moving autonomously in simulated chemical environments display a variety of chemotaxis strategies similar to those of biological worms.

1 INTRODUCTION The nematode C. elegans provides a unique opportunity to study the neuronal basis of neural computation in an animal capable of complex goal-oriented behaviors. The adult hermaphrodite is only 1 mm long, and has exactly 302 neurons and 95 muscle cells. The morphology of every cell and the location of most electrical and chemical synapses are known precisely (White et al., 1986), making C. elegans especially attractive for study.
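The abstract's trained recurrent controller is not reproduced here, but one of the chemotaxis strategies biological worms display can be sketched with a much simpler hand-written rule. The following Python toy (all numbers and the Gaussian gradient are made up for illustration) implements run-and-tumble steering: keep the current heading while the sampled concentration rises, and reorient randomly, pirouette-like, when it falls.

```python
import math
import random

def concentration(x, y):
    """Radial Gaussian chemical gradient peaking at the origin (illustrative)."""
    return math.exp(-(x * x + y * y) / 50.0)

def simulate(steps=2000, speed=0.2, seed=1):
    """Run-and-tumble chemotaxis: hold the heading while concentration
    rises; reorient randomly (a 'pirouette') when it falls."""
    rng = random.Random(seed)
    x, y = 8.0, 0.0                          # start 8 units from the peak
    heading = rng.uniform(0.0, 2.0 * math.pi)
    c_prev = concentration(x, y)
    closest = math.hypot(x, y)
    for _ in range(steps):
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        c = concentration(x, y)
        if c < c_prev:                       # gradient lost: pirouette
            heading = rng.uniform(0.0, 2.0 * math.pi)
        c_prev = c
        closest = min(closest, math.hypot(x, y))
    return closest

closest_approach = simulate()
```

Even this crude rule climbs the gradient reliably; the paper's point is that trained recurrent networks discover comparable strategies from the model's biomechanical and neurophysiological constraints.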


Are Hopfield Networks Faster than Conventional Computers?

Neural Information Processing Systems

It is shown that conventional computers can be exponentially faster than planar Hopfield networks: although there are planar Hopfield networks that take exponential time to converge, a stable state of an arbitrary planar Hopfield network can be found by a conventional computer in polynomial time.
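The convergence guarantee behind this comparison is standard: with symmetric weights and a zero diagonal, each asynchronous flip can only lower the network energy, so a fixed point is always reached. A minimal Python sketch (small random network, not from the paper):

```python
import random

def energy(W, s):
    """Hopfield energy E = -1/2 * s^T W s (zero thresholds)."""
    n = len(s)
    return -0.5 * sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

def converge(W, s):
    """Asynchronous updates until a fixed point.  With symmetric W and a
    zero diagonal no flip increases the energy, so the loop terminates."""
    n = len(s)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            h = sum(W[i][j] * s[j] for j in range(n))
            new = 1 if h >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
    return s

rng = random.Random(0)
n = 8
W = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        W[i][j] = W[j][i] = rng.uniform(-1.0, 1.0)

s0 = [rng.choice((-1, 1)) for _ in range(n)]
state = converge(W, list(s0))
stable = all(
    (1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1) == state[i]
    for i in range(n)
)
```

The paper's result concerns how long this loop can take in the planar case, versus what a conventional algorithm can compute directly.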


Reinforcement Learning for Mixed Open-loop and Closed-loop Control

Neural Information Processing Systems

Closed-loop control relies on sensory feedback that is usually assumed to be free. But if sensing incurs a cost, it may be cost-effective to take sequences of actions in open-loop mode. We describe a reinforcement learning algorithm that learns to combine open-loop and closed-loop control when sensing incurs a cost. Although we assume reliable sensors, use of open-loop control means that actions must sometimes be taken when the current state of the controlled system is uncertain. This is a special case of the hidden-state problem in reinforcement learning, and to cope, our algorithm relies on short-term memory.
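The trade-off being learned can be illustrated without the paper's algorithm. In the following toy Monte Carlo (goal distance, slip probability and costs are all invented), an agent walks toward a goal, paying a fixed charge each time it senses its position and executing up to k steps open-loop in between:

```python
import random

def run_episode(k, sense_cost, rng, goal=20, p_success=0.9):
    """Walk to `goal`: sense (paying `sense_cost`) once per plan, then
    execute up to k steps open-loop; each step succeeds with p_success."""
    pos, cost = 0, 0.0
    while True:
        cost += sense_cost               # close the loop: read position
        if pos >= goal:
            return cost
        for _ in range(min(k, goal - pos)):
            cost += 1.0                  # unit motion cost per step
            if rng.random() < p_success:
                pos += 1

def avg_cost(k, sense_cost, episodes=400, seed=0):
    rng = random.Random(seed)
    return sum(run_episode(k, sense_cost, rng) for _ in range(episodes)) / episodes
```

When sensing is expensive, longer open-loop runs (larger k) win despite the resulting state uncertainty; that is the regime the paper's learning algorithm must discover, using short-term memory to track what has been done since the last observation.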


Consistent Classification, Firm and Soft

Neural Information Processing Systems

A classifier is called consistent with respect to a given set of class-labeled points if it correctly classifies the set. We consider classifiers defined by unions of local separators and propose algorithms for consistent classifier reduction. The expected complexities of the proposed algorithms are derived along with the expected classifier sizes. In particular, the proposed approach yields a consistent reduction of the nearest neighbor classifier, which performs "firm" classification, assigning each new object to a class regardless of the data structure. The proposed reduction method suggests a notion of "soft" classification, allowing for indecision with respect to objects which are insufficiently or ambiguously supported by the data. The performance of the proposed classifiers in predicting stock behavior is compared to that achieved by the nearest neighbor method.

1 Introduction Certain classification problems, such as recognizing the digits of a handwritten zip code, require the assignment of each object to a class. Others, involving relatively small amounts of data and high risk, call for indecision until more data become available. Examples in such areas as medical diagnosis, stock trading and radar detection are well known. The training data for the classifier in both cases will correspond to firmly labeled members of the competing classes.
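The flavor of a consistent reduction can be shown with a classic (related, but not the paper's) technique: Hart-style condensation of the nearest neighbor classifier, which grows a prototype subset until it classifies every training point correctly. A Python sketch on made-up 2-D data:

```python
import random

def nn_label(protos, p):
    """1-NN label of point p = (x, y, label) given a prototype list."""
    nearest = min(protos, key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)
    return nearest[2]

def condense(points):
    """Grow a prototype subset until it classifies every training
    point correctly -- i.e. until it is consistent with the set."""
    protos = [points[0]]
    changed = True
    while changed:
        changed = False
        for p in points:
            if nn_label(protos, p) != p[2]:
                protos.append(p)
                changed = True
    return protos

rng = random.Random(0)
data = [(rng.gauss(-3, 0.5), rng.gauss(0, 0.5), 'A') for _ in range(50)] + \
       [(rng.gauss(3, 0.5), rng.gauss(0, 0.5), 'B') for _ in range(50)]
reduced = condense(data)
```

On well-separated classes the reduced set is a small fraction of the data yet still consistent. The paper's local-separator approach additionally yields the "soft" option of declining to classify ambiguous points, which plain 1-NN cannot do.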


Learning Temporally Persistent Hierarchical Representations

Neural Information Processing Systems

A biologically motivated model of cortical self-organization is proposed. Context is combined with bottom-up information via a maximum likelihood cost function. Clusters of one or more units are modulated by a common contextual gating signal; they thereby organize themselves into mutually supportive predictors of abstract contextual features. The model was tested on its ability to discover viewpoint-invariant classes in a set of real image sequences of centered, gradually rotating faces. It performed considerably better than supervised back-propagation at generalizing to novel views from a small number of training examples.


Regression with Input-Dependent Noise: A Bayesian Treatment

Neural Information Processing Systems

In most treatments of the regression problem it is assumed that the distribution of target data can be described by a deterministic function of the inputs, together with additive Gaussian noise having constant variance. The use of maximum likelihood to train such models then corresponds to the minimization of a sum-of-squares error function. In many applications a more realistic model would allow the noise variance itself to depend on the input variables. However, the use of maximum likelihood to train such models would give highly biased results. In this paper we show how a Bayesian treatment can allow for an input-dependent variance while overcoming the bias of maximum likelihood.

1 Introduction In regression problems it is important not only to predict the output variables but also to have some estimate of the error bars associated with those predictions.
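The kind of model the abstract criticizes maximum likelihood for can be written down concretely. This Python sketch (not the paper's Bayesian treatment; the linear mean w*x and log-variance a + b*x parameterization are assumptions for illustration) fits a heteroscedastic regression by gradient descent on the negative log-likelihood:

```python
import math
import random

rng = random.Random(0)
# synthetic data: y = 2x + Gaussian noise whose variance grows with x
xs = [rng.uniform(0.0, 2.0) for _ in range(200)]
ys = [2.0 * x + rng.gauss(0.0, math.exp(0.5 * (-2.0 + x))) for x in xs]

# model: mean w*x, log-variance a + b*x; NLL per point is
#   0.5 * [ (y - w*x)^2 * exp(-(a + b*x)) + (a + b*x) ]
w, a, b = 0.0, 0.0, 0.0
lr = 0.02
for _ in range(5000):
    gw = ga = gb = 0.0
    for x, y in zip(xs, ys):
        prec = math.exp(-(a + b * x))        # 1 / sigma^2(x)
        res = y - w * x
        gw += -res * x * prec
        ga += 0.5 * (1.0 - res * res * prec)
        gb += 0.5 * x * (1.0 - res * res * prec)
    n = len(xs)
    w -= lr * gw / n
    a -= lr * ga / n
    b -= lr * gb / n
```

Maximum likelihood recovers the mean and an increasing variance here, but, as the abstract notes, such variance estimates are systematically biased low; the paper's contribution is a Bayesian treatment that corrects this.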


Sequential Tracking in Pricing Financial Options using Model Based and Neural Network Approaches

Neural Information Processing Systems

This paper shows how the prices of option contracts traded in financial markets can be tracked sequentially by means of the Extended Kalman Filter algorithm. I consider call and put option pairs with identical strike price and time of maturity as a two-output nonlinear system. The Black-Scholes approach popular in the finance literature and the Radial Basis Functions neural network are used in modelling the nonlinear system generating these observations. I show how both these systems may be identified recursively using the EKF algorithm. I present results of simulations on some FTSE 100 Index options data and discuss the implications of viewing the pricing problem in this sequential manner.

1 INTRODUCTION Data from the financial markets has recently been of much interest to the neural computing community. The complexity of the underlying macroeconomic system and the way traders react to the flow of information lead to highly nonlinear relationships between observations.
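The two-output nonlinear system being tracked is the Black-Scholes call/put pair. A minimal Python implementation of that measurement model (the EKF wrapper itself is omitted; parameter values below are arbitrary examples):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes(S, K, T, r, sigma):
    """European call and put prices under Black-Scholes:
    S spot, K strike, T time to maturity (years), r rate, sigma volatility."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    call = S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    put = K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)
    return call, put

call, put = black_scholes(100.0, 95.0, 0.5, 0.05, 0.2)
```

A quick sanity check is put-call parity, C - P = S - K*exp(-rT), which holds by construction. In the paper's setting the hidden state (e.g. volatility) is what the EKF updates recursively as new market prices arrive.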



A Mixture of Experts Classifier with Learning Based on Both Labelled and Unlabelled Data

Neural Information Processing Systems

We address statistical classifier design given a mixed training set consisting of a small labelled feature set and a (generally larger) set of unlabelled features. This situation arises, e.g., for medical images, where although training features may be plentiful, expensive expertise is required to extract their class labels. We propose a classifier structure and learning algorithm that make effective use of unlabelled data to improve performance. The learning is based on maximization of the total data likelihood, i.e. over both the labelled and unlabelled data subsets. Two distinct EM learning algorithms are proposed, differing in the EM formalism applied for unlabelled data. The classifier, based on a joint probability model for features and labels, is a "mixture of experts" structure that is equivalent to the radial basis function (RBF) classifier, but unlike RBFs, is amenable to likelihood-based training. The scope of application for the new method is greatly extended by the observation that test data, or any new data to classify, is in fact additional unlabelled data; thus a combined learning/classification operation, much akin to what is done in image segmentation, can be invoked whenever there is new data to classify. Experiments with data sets from the UC Irvine database demonstrate that the new learning algorithms and structure achieve substantial performance gains over alternative approaches.
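The core idea, maximizing likelihood over labelled and unlabelled data jointly, can be sketched in a far simpler setting than the paper's mixture-of-experts: a 1-D two-Gaussian mixture with unit variances and equal priors (all simplifications mine). Labelled points contribute hard responsibilities in the M-step; unlabelled points contribute the usual soft posteriors:

```python
import math
import random

def em_means(labelled, unlabelled, iters=100):
    """EM for a 1-D two-Gaussian mixture (unit variance, equal priors).
    Labelled points have fixed one-hot responsibilities; unlabelled
    points get soft posteriors, so both subsets shape the means."""
    mu = []
    for k in (0, 1):
        xs = [x for x, lab in labelled if lab == k]
        mu.append(sum(xs) / len(xs))          # init from labelled data only
    for _ in range(iters):
        # E-step: posterior responsibilities for the unlabelled data
        resp = []
        for x in unlabelled:
            p = [math.exp(-0.5 * (x - mu[k]) ** 2) for k in (0, 1)]
            z = p[0] + p[1]
            resp.append((p[0] / z, p[1] / z))
        # M-step: weighted means over labelled + unlabelled data
        new_mu = []
        for k in (0, 1):
            num = sum(x for x, lab in labelled if lab == k)
            den = sum(1 for _, lab in labelled if lab == k)
            num += sum(r[k] * x for r, x in zip(resp, unlabelled))
            den += sum(r[k] for r in resp)
            new_mu.append(num / den)
        mu = new_mu
    return mu

rng = random.Random(0)
labelled = [(rng.gauss(-2, 1), 0) for _ in range(5)] + \
           [(rng.gauss(2, 1), 1) for _ in range(5)]
unlabelled = [rng.gauss(-2, 1) for _ in range(50)] + \
             [rng.gauss(2, 1) for _ in range(50)]
mu0, mu1 = em_means(labelled, unlabelled)
```

With only five labels per class, the 100 unlabelled points sharpen the mean estimates considerably; the paper develops the analogous machinery for its RBF-equivalent mixture-of-experts classifier.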


Extraction of Temporal Features in the Electrosensory System of Weakly Electric Fish

Neural Information Processing Systems

The weakly electric fish, Eigenmannia, generates a quasi-sinusoidal, dipole-like electric field at individually fixed frequencies (250-600 Hz) by discharging an electric organ located in its tail (see Bullock and Heiligenberg, 1986, for reviews).