Repeat Until Bored: A Pattern Selection Strategy

Neural Information Processing Systems

An alternative to the typical technique of selecting training examples independently from a fixed distribution is formulated and analyzed, in which the current example is presented repeatedly until the error for that item is reduced to some criterion value; then, another item is randomly selected. This heuristic can dramatically increase or decrease the convergence time, depending on the task, and the convergence time is very sensitive to the criterion value.
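The selection strategy the abstract describes can be sketched in a few lines. The following is a minimal illustration only, assuming a single linear unit trained by LMS on toy data; the model, learning rate, and default criterion are assumptions, not details from the paper.

```python
import random

def repeat_until_bored(examples, epsilon=0.01, lr=0.1, steps=10_000, seed=0):
    """Train a single linear unit with the 'repeat until bored' strategy:
    present one randomly chosen example repeatedly until its squared
    error drops below the criterion `epsilon`, then draw another item.
    `examples` is a list of (input_vector, target) pairs."""
    rng = random.Random(seed)
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    x, t = rng.choice(examples)            # current item
    for _ in range(steps):
        y = sum(wi * xi for wi, xi in zip(w, x)) + b
        err = t - y
        if err * err < epsilon:            # "bored": criterion reached,
            x, t = rng.choice(examples)    # so select a new random item
            continue
        for i in range(n):                 # LMS update on the current item
            w[i] += lr * err * x[i]
        b += lr * err
    return w, b
```

For comparison, the standard scheme would draw a fresh example at every step; here a new draw happens only once the criterion is met, which is why convergence is so sensitive to the chosen criterion value.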


Interpretation of Artificial Neural Networks: Mapping Knowledge-Based Neural Networks into Rules

Neural Information Processing Systems

We propose and empirically evaluate a method for the extraction of expert-comprehensible rules from trained neural networks. Our method operates in the context of a three-step process for learning that uses rule-based domain knowledge in combination with neural networks. Empirical tests using real-world problems from molecular biology show that the rules our method extracts from trained neural networks closely reproduce the accuracy of the network from which they came, are superior to the rules derived by a learning system that directly refines symbolic rules, and are expert-comprehensible.


Merging Constrained Optimisation with Deterministic Annealing to "Solve" Combinatorially Hard Problems

Neural Information Processing Systems

Several parallel analogue algorithms, based upon mean field theory (MFT) approximations to an underlying statistical mechanics formulation, and requiring an externally prescribed annealing schedule, now exist for finding approximate solutions to difficult combinatorial optimisation problems. They have been applied to the Travelling Salesman Problem (TSP), as well as to various issues in computational vision and cluster analysis. I show here that any given MFT algorithm can be combined in a natural way with notions from the areas of constrained optimisation and adaptive simulated annealing to yield a single homogeneous and efficient parallel relaxation technique, for which an externally prescribed annealing schedule is no longer required. The results of numerical simulations on 50-city and 100-city TSP problems are presented, which show that the ensuing algorithms are typically an order of magnitude faster than the MFT algorithms alone, and which on occasion yield superior solutions as well.


A Self-Organizing Integrated Segmentation and Recognition Neural Net

Neural Information Processing Systems

Standard pattern recognition systems usually involve a segmentation step prior to the recognition step. For example, it is very common in character recognition to segment characters in a pre-processing step, then normalize the individual characters and pass them to a recognition engine such as a neural network, as in the work of LeCun et al. (1988) and Martin and Pittman (1988). This separation between segmentation and recognition becomes unreliable if the characters are touching each other, touching bounding boxes, broken, or noisy. Other applications such as scene analysis or continuous speech recognition pose similar and more severe segmentation problems. The difficulties encountered in these applications present an apparent dilemma: one cannot recognize the patterns until they are segmented, yet one cannot segment them until they are recognized.



A Topographic Product for the Optimization of Self-Organizing Feature Maps

Neural Information Processing Systems

Self-organizing feature maps like the Kohonen map (Kohonen, 1989; Ritter et al., 1990) not only provide a plausible explanation for the formation of maps in brains, e.g. in the visual system (Obermayer et al., 1990), but have also been applied to problems like vector quantization or robot arm control (Martinetz et al., 1990). The underlying organizing principle is the preservation of neighborhood relations. For this principle to lead to a most useful map, the topological structure of the output space must roughly fit the structure of the input data. However, in technical applications this structure is often not a priori known. For this reason several attempts have been made to modify the Kohonen algorithm such that not only the weights, but also the output space topology itself, is adapted during learning (Kangas et al., 1990; Martinetz et al., 1991). Our contribution is also concerned with optimal output space topologies, but we follow a different approach, which avoids a possibly complicated structure of the output space. First we describe a quantitative measure for the preservation of neighborhood relations in maps, the topographic product P. The topographic product was invented under the name "wavering product" in nonlinear dynamics in order to optimize the embeddings of chaotic attractors (Liebert et al., 1991).


Self-organization in real neurons: Anti-Hebb in 'Channel Space'?

Neural Information Processing Systems

Ion channels are the dynamical systems of the nervous system. Their distribution within the membrane governs not only communication of information between neurons, but also how that information is integrated within the cell. Here, an argument is presented for an 'anti-Hebbian' rule for changing the distribution of voltage-dependent ion channels in order to flatten voltage curvatures in dendrites. Simulations show that this rule can account for the self-organisation of dynamical receptive field properties such as resonance and direction selectivity. It also creates the conditions for the faithful conduction within the cell of signals to which the cell has been exposed. Various possible cellular implementations of such a learning rule are proposed, including activity-dependent migration of channel proteins in the plane of the membrane.


Iterative Construction of Sparse Polynomial Approximations

Neural Information Processing Systems

We present an iterative algorithm for nonlinear regression based on construction of sparse polynomials. Polynomials are built sequentially from lower to higher order. Selection of new terms is accomplished using a novel look-ahead approach that predicts whether a variable contributes to the remaining error. The algorithm is based on the tree-growing heuristic in LMS Trees which we have extended to approximation of arbitrary polynomials of the input features. In addition, we provide a new theoretical justification for this heuristic approach.


VISIT: A Neural Model of Covert Visual Attention

Neural Information Processing Systems

Visual attention is the ability to dynamically restrict processing to a subset of the visual field. Researchers have long argued that such a mechanism is necessary to efficiently perform many intermediate level visual tasks. This paper describes VISIT, a novel neural network model of visual attention.