

Encoding Geometric Invariances in Higher-Order Neural Networks

Neural Information Processing Systems

ENCODING GEOMETRIC INVARIANCES IN HIGHER-ORDER NEURAL NETWORKS C.L. Giles Air Force Office of Scientific Research, Bolling AFB, DC 20332 R.D. Griffin Naval Research Laboratory, Washington, DC 20375-5000 T. Maxwell Sachs-Freeman Associates, Landover, MD 20785 ABSTRACT We describe a method of constructing higher-order neural networks that respond invariantly under geometric transformations on the input space. By requiring each unit to satisfy a set of constraints on the interconnection weights, a particular structure is imposed on the network. A network built using such an architecture maintains its invariant performance independent of the values the weights assume, of the learning rules used, and of the form of the nonlinearities in the network. The invariance exhibited by a first-order network is usually of a trivial sort, e.g., responding only to the average input in the case of translation invariance, whereas higher-order networks can perform useful functions and still exhibit the invariance. We derive the weight constraints for translation, rotation, scale, and several combinations of these transformations, and report results of simulation studies.
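The translation-invariant case can be sketched concretely: if each second-order weight w(i, j) is constrained to depend only on the relative offset (j - i), the unit's response is unchanged under cyclic shifts of the input, for any weight values and any nonlinearity. The NumPy sketch below illustrates this; the cyclic (wrap-around) setting, the variable names, and the sigmoid are assumptions for illustration, not the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
# Constrain second-order weights: W[i, j] depends only on (j - i) mod N,
# so the unit's response is invariant under cyclic translation of the input.
v = rng.normal(size=N)                      # free parameters, one per offset
W = np.empty((N, N))
for i in range(N):
    for j in range(N):
        W[i, j] = v[(j - i) % N]

def unit(x):
    # Second-order (quadratic) unit followed by an arbitrary nonlinearity;
    # the invariance comes from the weight constraint, not the sigmoid.
    return 1.0 / (1.0 + np.exp(-x @ W @ x))

x = rng.normal(size=N)
shifted = np.roll(x, 3)                     # cyclic translation of the input
assert np.isclose(unit(x), unit(shifted))   # response is unchanged
```

Because the invariance is built into the weight structure, it survives any learning rule that preserves the constraint (e.g., updating the offset parameters v rather than the individual entries of W).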


Basins of Attraction for Electronic Neural Networks

Neural Information Processing Systems

In a useful associative memory, an initial state should lead reliably to the "closest" memory. This requirement suggests that a well-behaved basin of attraction should evenly surround its attractor and have a smooth and regular shape. One-dimensional basin maps plotting "pull-in" probability against Hamming distance from an attractor do not reveal the shape of the basin in the high-dimensional space of initial states [9].
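A one-dimensional basin map of the kind criticized here is easy to generate for a small Hopfield network. The sketch below (network size, pattern count, the synchronous update rule, and the trial counts are all illustrative assumptions) estimates pull-in probability as a function of Hamming distance from a stored attractor:

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 64, 3
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns).astype(float) / N   # Hebbian outer-product weights
np.fill_diagonal(W, 0.0)

def recall(state, steps=20):
    s = state.copy()
    for _ in range(steps):                      # synchronous threshold updates
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

def pull_in_probability(d, trials=100):
    """Estimate P(converge to pattern 0 | start d bit-flips away from it)."""
    hits = 0
    for _ in range(trials):
        s = patterns[0].copy()
        flip = rng.choice(N, size=d, replace=False)
        s[flip] *= -1                           # start d bits from the attractor
        hits += np.array_equal(recall(s), patterns[0])
    return hits / trials
```

In the paper's terms, such a curve collapses the high-dimensional basin onto a single radius: it reports how often states at distance d pull in, but says nothing about the basin's shape in the full state space.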


Neuromorphic Networks Based on Sparse Optical Orthogonal Codes

Neural Information Processing Systems

Synthetic neural nets [1,2] represent an active and growing research field. Fundamental issues, as well as practical implementations with electronic and optical devices, are being studied. In addition, several learning algorithms have been studied, for example stochastically adaptive systems [3] based on many-body physics optimization concepts [4,5]. Signal processing in the optical domain has also been an active field of research. A wide variety of nonlinear all-optical devices are being studied, directed towards applications both in optical computing and in optical switching.


REFLEXIVE ASSOCIATIVE MEMORIES

Neural Information Processing Systems

REFLEXIVE ASSOCIATIVE MEMORIES Hendricus G. Loos Laguna Research Laboratory, Fallbrook, CA 92028-9765 ABSTRACT In the synchronous discrete model, the average memory capacity of bidirectional associative memories (BAMs) is compared with that of Hopfield memories, by means of a calculation of the percentage of good recall for 100 random BAMs of dimension 64x64, for different numbers of stored vectors. The memory capacity is found to be much smaller than the Kosko upper bound, which is the lesser of the two dimensions of the BAM. On average, a 64x64 BAM has about 68% of the capacity of the corresponding Hopfield memory with the same number of neurons. The memory capacity limitations are due to spurious stable states, which arise in BAMs in much the same way as in Hopfield memories. Occurrence of spurious stable states can be avoided by replacing the thresholding in the back layer of the BAM by another nonlinear process, here called "Dominant Label Selection" (DLS).
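The capacity experiment described in the abstract is straightforward to reproduce in miniature. The sketch below (trial counts, iteration limit, and the tie-breaking rule are assumptions) measures the fraction of random 64x64 BAMs in which every stored pair is recovered exactly by the usual iterated forward/backward thresholding:

```python
import numpy as np

rng = np.random.default_rng(2)
n = m = 64

def bam_recall_fraction(k, trials=20):
    """Fraction of random k-pair 64x64 BAMs in which every stored pair
    is recovered exactly by iterated forward/backward thresholding."""
    good = 0
    for _ in range(trials):
        A = rng.choice([-1, 1], size=(k, n))   # front-layer patterns
        B = rng.choice([-1, 1], size=(k, m))   # back-layer patterns
        W = A.T @ B                            # bidirectional Hebbian weight sum
        ok = True
        for a, b in zip(A, B):
            x, y = a.copy(), b.copy()
            for _ in range(10):                # bounce between layers
                y = np.sign(x @ W); y[y == 0] = 1
                x = np.sign(W @ y); x[x == 0] = 1
            ok = ok and np.array_equal(x, a) and np.array_equal(y, b)
        good += ok
    return good / trials
```

Well below capacity nearly every random BAM recalls all pairs, while well above it essentially none do, consistent with the spurious-stable-state account in the abstract (the Kosko bound of 64 is far from attained).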



MURPHY: A Robot that Learns by Doing

Neural Information Processing Systems

Current Focus Of Learning Research Most connectionist learning algorithms may be grouped into three general categories, commonly referred to as supervised, unsupervised, and reinforcement learning. Supervised learning requires the explicit participation of an intelligent teacher, usually to provide the learning system with task-relevant input-output pairs (for two recent examples, see [1,2]). Unsupervised learning, exemplified by "clustering" algorithms, is generally concerned with detecting structure in a stream of input patterns [3,4,5,6,7]. In its final state, an unsupervised learning system will typically represent the discovered structure as a set of categories representing regions of the input space, or, more generally, as a mapping from the input space into a space of lower dimension that is somehow better suited to the task at hand. In reinforcement learning, a "critic" rewards or penalizes the learning system, until the system ultimately produces the correct output in response to a given input pattern [8]. It has seemed an inevitable tradeoff that systems needing to rapidly learn specific, behaviorally useful input-output mappings must necessarily do so under the auspices of an intelligent teacher with a ready supply of task-relevant training examples. This state of affairs has seemed somewhat paradoxical, since the processes of perceptual and cognitive development in human infants, for example, do not depend on the moment-by-moment intervention of a teacher of any sort. Learning by Doing The current work has been focused on a fourth type of learning algorithm, i.e. learning-by-doing, an approach that has been very little studied from either a connectionist perspective
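The unsupervised category can be made concrete with a minimal online clustering sketch: with no teacher and no critic, a competitive unit simply discovers the regions present in its input stream. The data distribution, learning rate, and initialization below are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
# An unlabeled input stream drawn from two well-separated clusters.
data = np.vstack([rng.normal(-2, 0.5, size=(100, 2)),
                  rng.normal(+2, 0.5, size=(100, 2))])
rng.shuffle(data)

centers = data[:2].copy()                  # initialize from the stream itself
lr = 0.1
for x in data:                             # online competitive learning
    k = np.argmin(np.linalg.norm(centers - x, axis=1))
    centers[k] += lr * (x - centers[k])    # move only the winning unit
```

After one pass, the two centers settle near the two cluster means: the "discovered structure as a set of categories representing regions of the input space" that the passage describes, obtained without any input-output pairs.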


Using Neural Networks to Improve Cochlear Implant Speech Perception

Neural Information Processing Systems

An increasing number of profoundly deaf patients suffering from sensorineural deafness are using cochlear implants as prostheses. After the implant, sound can be detected through the electrical stimulation of the remaining peripheral auditory nervous system. Although great progress has been achieved in this area, no useful speech recognition has been attained with either single- or multiple-channel cochlear implants. Coding evidence suggests that, to couple effectively with the natural speech perception system, an implant must simulate the temporal dispersion and other phenomena found in the natural receptors, which are currently not implemented in any cochlear implant. To this end, we present a computational model using artificial neural networks (ANNs) to incorporate these natural phenomena in the artificial cochlea.


PARTITIONING OF SENSORY DATA BY A CORTICAL NETWORK

Neural Information Processing Systems

SUMMARY To process sensory data, sensory brain areas must preserve information about both the similarities and differences among learned cues: without the latter, acuity would be lost, whereas without the former, degraded versions of a cue would be erroneously thought to be distinct cues, and would not be recognized. We have constructed a model of piriform cortex incorporating a large number of biophysical, anatomical and physiological parameters, such as two-step excitatory firing thresholds, necessary and sufficient conditions for long-term potentiation (LTP) of synapses, three distinct types of inhibitory currents (short IPSPs, long hyperpolarizing currents (LHP) and long cell-specific afterhyperpolarization (AHP)), sparse connectivity between bulb and layer-II cortex, caudally-flowing excitatory collateral fibers, nonlinear dendritic summation, etc. We have tested the model for its ability to learn similarity- and difference-preserving encodings of incoming sensory cues; the biological characteristics of the model enable it to produce multiple encodings of each input cue in such a way that different readouts of the cell firing activity of the model preserve both similarity and difference information. In particular, probabilistic quantal transmitter-release properties of piriform synapses give rise to probabilistic postsynaptic voltage levels which, in combination with the activity of local patches of inhibitory interneurons in layer II, differentially select bursting vs. single-pulsing layer-II cells. Time-locked firing to the theta rhythm (Larson and Lynch, 1986) enables distinct spatial patterns to be read out against a relatively quiescent background firing rate. Training trials using the physiological rules for induction of LTP yield stable layer-II-cell spatial firing patterns for learned cues.
Similar simulated olfactory input patterns (i.e., those that share many chemical features) will give rise to strongly-overlapping bulb firing patterns, activating many shared lateral olfactory tract (LOT) axons innervating layer Ia of piriform cortex, which in turn yields highly overlapping layer-II-cell excitatory potentials, enabling this spatial layer-II-cell encoding to preserve the overlap (similarity) among similar inputs. At the same time, those synapses that are enhanced by the learning process cause stronger cell firing, yielding strong, cell-specific afterhyperpolarizing (AHP) currents. Local inhibitory interneurons effectively select alternate cells to fire once strongly-firing cells have undergone AHP. These alternate cells then activate their caudally-flowing recurrent collaterals, activating distinct populations of synapses in caudal layer Ib.


A 'Neural' Network that Learns to Play Backgammon

Neural Information Processing Systems

QUALITATIVE RESULTS Analysis of the weights produced by training a network is an exceedingly difficult problem, which we have only been able to approach qualitatively. In Figure 1 we present a diagram showing the connection strengths in a network with 651 input units and no hidden units.


Learning in Networks of Nondeterministic Adaptive Logic Elements

Neural Information Processing Systems

LEARNING IN NETWORKS OF NONDETERMINISTIC ADAPTIVE LOGIC ELEMENTS Richard C. Windecker* AT&T Bell Laboratories, Middletown, NJ 07748 ABSTRACT This paper presents a model of nondeterministic adaptive automata that are constructed from simpler nondeterministic adaptive information processing elements. The first half of the paper describes the model; the second half describes its properties. Chief among these properties is that network aggregates of the model elements can adapt appropriately when a single reinforcement channel provides the same positive or negative reinforcement signal to all adaptive elements of the network at the same time. This holds for multiple-input, multiple-output, multiple-layered, combinational and sequential networks. It also holds when some network elements are "hidden" in that their outputs are not directly seen by the external environment. INTRODUCTION There are two primary motivations for studying models of adaptive automata constructed from simple parts. First, they let us learn things about real biological systems whose properties are difficult to study directly: we form a hypothesis about such systems, embody it in a model, and then see if the model has reasonable learning and behavioral properties. In the present work, the hypothesis being tested is: that much of an animal's behavior as determined by its nervous system is intrinsically nondeterministic; that learning consists of incremental changes in the probabilities governing the animal's behavior; and that this is a consequence of the animal's nervous system consisting of an aggregate of information processing elements, some of which are individually nondeterministic and adaptive. The second motivation for studying models of this type is to find ways of building machines that can learn to do (artificially) intelligent and practical things.
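A single element of the kind described can be sketched with a reward-modulated probability update: a stochastic binary unit receives only a global +1/-1 reinforcement and incrementally shifts the probabilities governing its output. The task (two-input OR), the learning rate, and the specific REINFORCE-style rule below are assumptions for illustration; the paper's own update rule may differ:

```python
import numpy as np

rng = np.random.default_rng(4)
# One nondeterministic element: emits 1 with probability sigmoid(w . x).
w = np.zeros(3)                              # two inputs plus a bias weight
lr = 0.5
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
target = {p: int(p[0] or p[1]) for p in inputs}   # OR: an assumed demo task

def fire_probability(p_in):
    x = np.array([p_in[0], p_in[1], 1.0])
    return 1.0 / (1.0 + np.exp(-w @ x))

for _ in range(5000):
    p_in = inputs[rng.integers(4)]
    x = np.array([p_in[0], p_in[1], 1.0])
    prob = 1.0 / (1.0 + np.exp(-w @ x))
    out = int(rng.random() < prob)           # nondeterministic output
    r = 1.0 if out == target[p_in] else -1.0 # single global reinforcement
    # Raise the probability of the emitted output when rewarded,
    # lower it when penalized (a REINFORCE-style rule).
    w += lr * r * (out - prob) * x
```

Because the update uses only the scalar reinforcement and the element's own input and output, the same signal can be broadcast unchanged to every element of a larger network, which is the property the abstract emphasizes.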