Information Technology
MURPHY: A Robot that Learns by Doing
Current Focus of Learning Research

Most connectionist learning algorithms may be grouped into three general categories, commonly referred to as supervised, unsupervised, and reinforcement learning. Supervised learning requires the explicit participation of an intelligent teacher, usually to provide the learning system with task-relevant input-output pairs (for two recent examples, see [1,2]). Unsupervised learning, exemplified by "clustering" algorithms, is generally concerned with detecting structure in a stream of input patterns [3,4,5,6,7]. In its final state, an unsupervised learning system will typically represent the discovered structure as a set of categories representing regions of the input space or, more generally, as a mapping from the input space into a space of lower dimension that is somehow better suited to the task at hand. In reinforcement learning, a "critic" rewards or penalizes the learning system until the system ultimately produces the correct output in response to a given input pattern [8]. It has seemed an inevitable tradeoff that systems needing to rapidly learn specific, behaviorally useful input-output mappings must do so under the auspices of an intelligent teacher with a ready supply of task-relevant training examples. This state of affairs has seemed somewhat paradoxical, since the processes of perceptual and cognitive development in human infants, for example, do not depend on the moment-by-moment intervention of a teacher of any sort.

Learning by Doing

The current work has focused on a fourth type of learning algorithm, learning-by-doing, an approach that has been very little studied from either a connectionist perspective or within machine learning more broadly.
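As a concrete aside, the contrast among the three categories of learning can be made explicit in a few lines of code. The sketch below is illustrative only; the single linear unit, the k-means-style clustering update, and the perturbation-based critic update are our own choices, not taken from the cited papers:

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=3)                    # weights of one linear unit

    def supervised_step(x, target, lr=0.1):
        # Teacher supplies the desired output; the error drives the update (LMS rule).
        global w
        w += lr * (target - w @ x) * x

    def unsupervised_step(x, centers, lr=0.1):
        # No teacher: move the nearest cluster center toward the input (k-means style).
        k = np.argmin(((centers - x) ** 2).sum(axis=1))
        centers[k] += lr * (x - centers[k])

    def reinforcement_step(x, critic, lr=0.1):
        # A critic returns only a scalar reward for the (noisy) response;
        # the random perturbation is retained in proportion to that reward.
        global w
        noise = rng.normal(size=w.shape) * 0.01
        reward = critic(x, (w + noise) @ x)
        w += lr * reward * noise

Note how the information available to the learner shrinks from full target outputs, to none at all, to a single scalar evaluation; this is the tradeoff the passage above refers to.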
Using Neural Networks to Improve Cochlear Implant Speech Perception
An increasing number of profoundly deaf patients suffering from sensorineural deafness are using cochlear implants as prostheses. After implantation, sound can be detected through electrical stimulation of the remaining peripheral auditory nervous system. Although great progress has been achieved in this area, no useful speech recognition has been attained with either single- or multiple-channel cochlear implants. Coding evidence suggests that any implant that is to couple effectively with the natural speech perception system must simulate the temporal dispersion and other phenomena found in the natural receptors, which no current cochlear implant implements. To this end, we present here a computational model using artificial neural networks (ANNs) to incorporate these natural phenomena into the artificial cochlea.
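As an illustrative aside (not the authors' model): one way to picture "temporal dispersion" is as a preprocessing stage that smears each stimulation sample across several delayed, attenuated copies before it reaches the network. The exponential kernel shape and tap count below are assumptions chosen for clarity:

    import numpy as np

    def disperse(signal, taps=8, decay=0.5):
        # Convolve a 1-D stimulus with an exponentially decaying kernel,
        # spreading each sample over the following `taps` time steps.
        kernel = decay ** np.arange(taps)
        kernel /= kernel.sum()               # preserve overall signal energy
        return np.convolve(signal, kernel)[: len(signal)]

    pulse = np.zeros(20)
    pulse[5] = 1.0                           # a single stimulation pulse
    print(disperse(pulse).round(3))          # the pulse is smeared across ~8 samples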
PARTITIONING OF SENSORY DATA BY A CORTICAL NETWORK
Granger, Richard, Ambros-Ingerson, Jose, Henry, Howard, Lynch, Gary
SUMMARY

To process sensory data, sensory brain areas must preserve information about both the similarities and differences among learned cues: without the latter, acuity would be lost, whereas without the former, degraded versions of a cue would erroneously be treated as distinct cues and would not be recognized. We have constructed a model of piriform cortex incorporating a large number of biophysical, anatomical and physiological parameters, such as two-step excitatory firing thresholds, necessary and sufficient conditions for long-term potentiation (LTP) of synapses, three distinct types of inhibitory currents (short IPSPs, long hyperpolarizing currents (LHP) and long cell-specific afterhyperpolarization (AHP)), sparse connectivity between bulb and layer-II cortex, caudally-flowing excitatory collateral fibers, nonlinear dendritic summation, etc. We have tested the model for its ability to learn similarity- and difference-preserving encodings of incoming sensory cues; the biological characteristics of the model enable it to produce multiple encodings of each input cue in such a way that different readouts of the cell firing activity of the model preserve both similarity and difference information. In particular, probabilistic quantal transmitter-release properties of piriform synapses give rise to probabilistic postsynaptic voltage levels which, in combination with the activity of local patches of inhibitory interneurons in layer II, differentially select bursting vs. single-pulsing layer-II cells. Time-locked firing to the theta rhythm (Larson and Lynch, 1986) enables distinct spatial patterns to be read out against a relatively quiescent background firing rate. Training trials using the physiological rules for induction of LTP yield stable layer-II-cell spatial firing patterns for learned cues. Similar simulated olfactory input patterns (i.e., those that share many chemical features) give rise to strongly overlapping bulb firing patterns, activating many shared lateral olfactory tract (LOT) axons innervating layer Ia of piriform cortex, which in turn yields highly overlapping layer-II-cell excitatory potentials, enabling this spatial layer-II-cell encoding to preserve the overlap (similarity) among similar inputs. At the same time, those synapses that are enhanced by the learning process cause stronger cell firing, yielding strong, cell-specific afterhyperpolarizing (AHP) currents. Local inhibitory interneurons effectively select alternate cells to fire once strongly-firing cells have undergone AHP. These alternate cells then activate their caudally-flowing recurrent collaterals, activating distinct populations of synapses in caudal layer Ib.
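As an illustrative aside (our toy construction, not the authors' biophysical simulation): the alternate-cell selection mechanism can be caricatured in a few lines, with firing driving a per-cell AHP that hands successive readouts off to the next-most-excited cells:

    import numpy as np

    rng = np.random.default_rng(1)
    excitation = rng.random(10)              # layer-II cell excitatory potentials
    ahp = np.zeros(10)                       # per-cell afterhyperpolarizing current

    for readout in range(3):
        drive = excitation - ahp
        winners = np.argsort(drive)[-3:]     # local inhibition: only the most
        print(f"readout {readout}: cells {sorted(winners.tolist())}")
        ahp[winners] += 1.0                  # firing triggers a strong, lasting AHP

Each pass prints a different set of active cells, giving the multiple encodings of a single cue described above.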
Learning in Networks of Nondeterministic Adaptive Logic Elements
Windecker, Richard C. (AT&T Bell Laboratories, Middletown, NJ 07748)

ABSTRACT

This paper presents a model of nondeterministic adaptive automata constructed from simpler nondeterministic adaptive information processing elements. The first half of the paper describes the model; the second half describes the properties of networks built from the model elements. Chief among these properties is that network aggregates of the model elements can adapt appropriately when a single reinforcement channel provides the same positive or negative reinforcement signal to all adaptive elements of the network at the same time. This holds for multiple-input, multiple-output, multiple-layered, combinational and sequential networks. It also holds when some network elements are "hidden," in that their outputs are not directly seen by the external environment.

INTRODUCTION

There are two primary motivations for studying models of adaptive automata constructed from simple parts. First, they let us learn things about real biological systems whose properties are difficult to study directly: we form a hypothesis about such systems, embody it in a model, and then see whether the model has reasonable learning and behavioral properties. In the present work, the hypothesis being tested is that much of an animal's behavior, as determined by its nervous system, is intrinsically nondeterministic; that learning consists of incremental changes in the probabilities governing the animal's behavior; and that this is a consequence of the animal's nervous system consisting of an aggregate of information processing elements, some of which are individually nondeterministic and adaptive. The second motivation for studying models of this type is to find ways of building machines that can learn to do (artificially) intelligent and practical things.
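As an illustrative aside (our sketch, not Windecker's model): the broadcast-reinforcement property described in the abstract resembles the following scheme, in which stochastic binary elements all receive one scalar reinforcement signal and shift their firing probabilities toward rewarded recent outputs, hidden elements included. The XOR task, the logistic probability, and the learning rate are our assumptions:

    import numpy as np

    rng = np.random.default_rng(2)

    class StochasticElement:
        def __init__(self, n_inputs):
            self.w = np.zeros(n_inputs + 1)  # +1 for a bias weight

        def act(self, x):
            x = np.append(x, 1.0)            # constant bias input
            p = 1.0 / (1.0 + np.exp(-self.w @ x))
            self.x, self.p = x, p
            self.y = float(rng.random() < p) # nondeterministic binary output
            return self.y

        def reinforce(self, r, lr=0.2):
            # The same scalar r is broadcast to every element in the network.
            self.w += lr * r * (self.y - self.p) * self.x

    hidden = [StochasticElement(2) for _ in range(2)]
    out = StochasticElement(2)
    for trial in range(20000):
        x = rng.integers(0, 2, size=2).astype(float)
        h = np.array([e.act(x) for e in hidden])
        y = out.act(h)
        r = 1.0 if y == float(x[0] != x[1]) else -1.0   # critic's verdict (XOR task)
        for e in hidden + [out]:
            e.reinforce(r)                   # identical signal, hidden units included

The key design point mirrors the abstract: no element receives an individual error signal, yet the hidden elements can still adapt because each one's probability update is correlated with its own contribution to the rewarded behavior.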
New Hardware for Massive Neural Networks
Coon, Darryl D., Perera, A. G. Unil
ABSTRACT

Transient phenomena associated with forward-biased silicon p+-n-n+ structures at 4.2 K show remarkable similarities to biological neurons. The devices play a role similar to the two-terminal switching elements in Hodgkin-Huxley equivalent circuit diagrams. They provide simpler and more realistic neuron emulation than transistors or op-amps, and their power and current requirements are so low that they could be used in massive neural networks. Observed properties of simple circuits containing the devices include action potentials, refractory periods, threshold behavior, excitation, inhibition, summation over synaptic inputs, synaptic weights, temporal integration, memory, network connectivity modification based on experience, pacemaker activity, firing thresholds, coupling to sensors with graded signal outputs, and the dependence of firing rate on input current. Transfer functions for simple artificial neurons with spike-train inputs and spike-train outputs have been measured and correlated with input coupling.
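As a software analogy only (the devices themselves are physical, and this is not a model of the p+-n-n+ structure): a leaky integrate-and-fire unit reproduces several of the listed properties, including threshold behavior, a refractory period, and a firing rate that depends on input current:

    def firing_rate(i_in, threshold=1.0, leak=0.1, refractory=5, steps=10000):
        v, spikes, wait = 0.0, 0, 0
        for _ in range(steps):
            if wait > 0:                     # refractory period: input is ignored
                wait -= 1
                continue
            v += i_in - leak * v             # temporal integration with leak
            if v >= threshold:               # threshold crossing produces a spike
                spikes += 1
                v, wait = 0.0, refractory
        return spikes / steps

    for i in (0.05, 0.1, 0.2, 0.4):
        print(f"I = {i}: rate = {firing_rate(i):.3f}")

Subthreshold currents (here 0.05 and 0.1) never fire the unit, while stronger currents fire it at a rate that grows with the input, qualitatively matching the rate-current dependence reported for the devices.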
A Computer Simulation of Cerebral Neocortex: Computational Capabilities of Nonlinear Neural Networks
Singer, Alexander, Donoghue, John P.
A synthetic neural network simulation of cerebral neocortex was developed based on detailed anatomy and physiology. Processing elements possess temporal nonlinearities and connection patterns similar to those of cortical neurons. The network was able to replicate spatial and temporal integration properties found experimentally in neocortex. A certain level of randomness was found to be crucial for the robustness of at least some of the network's computational capabilities. Emphasis was placed on how synthetic simulations can be of use to the study of both artificial and biological neural networks.
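As an illustrative aside (our sketch, not the authors' simulation): the flavor of temporal integration combined with a nonlinearity can be shown with a unit whose decaying state summates inputs and fires only when two individually subthreshold inputs arrive close together in time. All constants below are assumptions:

    def responds(spike_times, amp=0.6, decay=0.8, threshold=1.0, steps=20):
        v = 0.0
        for t in range(steps):
            v *= decay                       # passive decay between inputs
            if t in spike_times:
                v += amp                     # each input alone is subthreshold
            if v >= threshold:
                return True                  # nonlinear, all-or-none response
        return False

    print(responds({3, 4}))    # True: inputs one step apart summate and fire
    print(responds({3, 12}))   # False: too far apart, the first input has decayed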