Pairwise Neural Network Classifiers with Probabilistic Outputs
Price, David, Knerr, Stefan, Personnaz, Léon, Dreyfus, Gérard
Multi-class classification problems can be efficiently solved by partitioning the original problem into sub-problems involving only two classes: for each pair of classes, a (potentially small) neural network is trained using only the data of these two classes. We show how to combine the outputs of the two-class neural networks in order to obtain posterior probabilities for the class decisions. The resulting probabilistic pairwise classifier is part of a handwriting recognition system which is currently applied to check reading. We present results on real-world databases and show that, from a practical point of view, these results compare favorably to other neural network approaches.
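The combination step described above can be sketched in a few lines. Below is a minimal pure-Python version of one standard closed-form rule for turning pairwise two-class probabilities into per-class posteriors; the function name and the exact formula are illustrative, not necessarily the paper's:

```python
def combine_pairwise(p, eps=1e-9):
    """Combine pairwise two-class probabilities into per-class posteriors.

    p[i][j] is an estimate of P(class i | x, class is i or j), with
    p[j][i] = 1 - p[i][j].  Uses the closed-form combination
    P_i ~ 1 / (sum_{j != i} 1/p[i][j] - (K - 2)), then renormalizes.
    This rule is exact when the pairwise probabilities are consistent
    with a single posterior distribution.  (Sketch of one common rule;
    the paper's formula may differ in detail.)
    """
    K = len(p)
    post = []
    for i in range(K):
        s = sum(1.0 / max(p[i][j], eps) for j in range(K) if j != i)
        post.append(1.0 / (s - (K - 2)))
    total = sum(post)
    return [x / total for x in post]
```

If the pairwise estimates are generated from true posteriors q via p[i][j] = q_i / (q_i + q_j), this rule recovers q exactly, which is a convenient sanity check for an implementation.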
SARDNET: A Self-Organizing Feature Map for Sequences
James, Daniel L., Miikkulainen, Risto
A self-organizing neural network for sequence classification called SARDNET is described and analyzed experimentally. SARDNET extends the Kohonen Feature Map architecture with activation retention and decay in order to create unique distributed response patterns for different sequences. SARDNET yields extremely dense yet descriptive representations of sequential input in very few training iterations. The network has proven successful on mapping arbitrary sequences of binary and real numbers, as well as phonemic representations of English words. Potential applications include isolated spoken word recognition and cognitive science models of sequence processing. 1 INTRODUCTION While neural networks have proved a good tool for processing static patterns, classifying sequential information has remained a challenging task. The problem involves recognizing patterns in a time series of vectors, which requires forming a good internal representation for the sequences. Several researchers have proposed extending the self-organizing feature map (Kohonen 1989, 1990), a highly successful static pattern classification method, to sequential information (Kangas 1991; Samarabandu and Jakubowicz 1990; Scholtes 1991). Below, three of the most recent of these networks are briefly described. The remainder of the paper focuses on a new architecture designed to overcome the shortcomings of these approaches.
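The activation-retention-and-decay mechanism can be illustrated with a toy one-dimensional map. In the sketch below, the winning unit for each sequence element is set to full activation and removed from later competition, while all earlier winners decay by a fixed factor, so the final pattern encodes both which inputs occurred and their order. This is a minimal sketch of the mechanism only; weight adaptation and the 2-D map of the actual architecture are omitted, and the names are illustrative:

```python
def sardnet_response(seq, weights, decay=0.9):
    """Map an input sequence onto a SARDNET-style activation pattern.

    For each element, the closest still-available map unit wins, is set
    to activation 1.0, and is excluded from later competition; all
    previously active units are multiplied by `decay`.
    """
    act = [0.0] * len(weights)
    available = set(range(len(weights)))
    for x in seq:
        winner = min(available, key=lambda i: abs(weights[i] - x))
        available.discard(winner)
        act = [a * decay for a in act]   # decay earlier responses
        act[winner] = 1.0                # newest winner at full strength
    return act
```

For the sequence [0.0, 1.0, 0.5] on a map with units at [0.0, 0.25, 0.5, 0.75, 1.0], the last winner ends at 1.0 and earlier winners at 0.9 and 0.81, so the ordering of the inputs can be read directly off the activation levels.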
An Integrated Architecture of Adaptive Neural Network Control for Dynamic Systems
Liu, Ke, Tokar, Robert L., McVey, Brian D.
Much of the recent work in the neural network control field uses no error feedback as a control input, which gives rise to a lack of adaptation. The integrated architecture presented in this paper combines feedforward control and error-feedback adaptive control using neural networks. The paper reveals the different internal functionality of these two kinds of neural network controllers for particular input styles, e.g., state feedback and error feedback. With error feedback, neural network controllers learn the slopes or gains with respect to the error feedback, producing an error-driven adaptive control system. The results demonstrate that the two kinds of control scheme can be combined to realize their individual advantages. Testing with disturbances added to the plant shows good tracking and adaptation with the integrated neural control architecture.
An Auditory Localization and Coordinate Transform Chip
The localization and orientation to various novel or interesting events in the environment is a critical sensorimotor ability in all animals, predator or prey. In mammals, the superior colliculus (SC) plays a major role in this behavior, the deeper layers exhibiting topographically mapped responses to visual, auditory, and somatosensory stimuli. Sensory information arriving from different modalities should then be represented in the same coordinate frame. Auditory cues, in particular, are thought to be computed in head-based coordinates which must then be transformed to retinal coordinates. In this paper, an analog VLSI implementation for auditory localization in the azimuthal plane is described which extends the architecture proposed for the barn owl to a primate eye movement system where further transformation is required. This transformation is intended to model the projection in primates from auditory cortical areas to the deeper layers of the primate superior colliculus. This system is interfaced with an analog VLSI-based saccadic eye movement system also being constructed in our laboratory.
Coarse-to-Fine Image Search Using Neural Networks
Spence, Clay, Pearson, John C., Bergen, Jim
The efficiency of image search can be greatly improved by using a coarse-to-fine search strategy with a multi-resolution image representation. However, if the resolution is so low that the objects have few distinguishing features, search becomes difficult. We show that the performance of search at such low resolutions can be improved by using context information, i.e., objects visible at low-resolution which are not the objects of interest but are associated with them. The networks can be given explicit context information as inputs, or they can learn to detect the context objects, in which case the user does not have to be aware of their existence. We also use Integrated Feature Pyramids, which represent high-frequency information at low resolutions. The use of multiresolution search techniques allows us to combine information about the appearance of the objects on many scales in an efficient way. A natural form of exemplar selection also arises from these techniques. We illustrate these ideas by training hierarchical systems of neural networks to find clusters of buildings in aerial photographs of farmland.
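The efficiency argument behind coarse-to-fine search is easy to make concrete: a cheap low-resolution detector prunes the candidate set, and only the survivors are rescored by progressively finer, more expensive detectors. The sketch below is illustrative only (the function and parameter names are placeholders, and the paper's networks are replaced by arbitrary scoring functions ordered coarse to fine):

```python
def coarse_to_fine_search(score_fns, positions, top_k=3):
    """Coarse-to-fine search sketch.

    Score all candidate positions with the first (coarsest, cheapest)
    detector, keep only the top_k, then rescore the survivors with each
    successively finer detector.  Returns the best surviving position.
    """
    candidates = list(positions)
    for score in score_fns:            # ordered coarse -> fine
        ranked = sorted(candidates, key=score, reverse=True)
        candidates = ranked[:top_k]    # prune before the next, costlier pass
    return candidates[0]
```

With N candidates and k survivors per level, the fine detectors run on k items instead of N, which is where the savings come from; the low-resolution context cues discussed above serve to keep the true target inside those k survivors.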
A Neural Model of Delusions and Hallucinations in Schizophrenia
Ruppin, Eytan, Reggia, James A., Horn, David
We implement and study a computational model of Stevens' [1992] theory of the pathogenesis of schizophrenia. This theory hypothesizes that the onset of schizophrenia is associated with reactive synaptic regeneration occurring in brain regions receiving degenerating temporal lobe projections. Concentrating on one such area, the frontal cortex, we model a frontal module as an associative memory neural network whose input synapses represent incoming temporal projections. We analyze how, in the face of weakened external input projections, compensatory strengthening of internal synaptic connections and increased noise levels can maintain memory capacities (which are generally preserved in schizophrenia). However, these compensatory changes adversely lead to spontaneous, biased retrieval of stored memories, which accounts for the occurrence of schizophrenic delusions and hallucinations without any apparent external trigger, and for their tendency to concentrate on just a few central themes. Our results explain why these symptoms tend to wane as schizophrenia progresses, and why delayed therapeutic intervention leads to a much slower response.
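The compensation mechanism described above can be illustrated with a toy Hebbian associative memory: when the external cue is weakened, scaling up the internal recurrent weights can still restore correct retrieval. This is an illustrative toy only, not the paper's model (which includes noise, capacity analysis, and spontaneous retrieval dynamics); all names are placeholders:

```python
def hopfield_recall(patterns, cue, internal_gain=1.0, steps=20):
    """Minimal associative-memory sketch.

    Hebbian weights store the given +/-1 patterns; recall iterates
    s <- sign(internal_gain * W s + cue) synchronously.  Raising
    internal_gain while the external cue weakens illustrates the
    compensatory strengthening of internal synapses discussed above.
    """
    n = len(patterns[0])
    W = [[sum(p[i] * p[j] for p in patterns) / n if i != j else 0.0
          for j in range(n)] for i in range(n)]
    s = list(cue)
    for _ in range(steps):
        s = [1 if internal_gain * sum(W[i][j] * s[j] for j in range(n))
                  + cue[i] >= 0 else -1
             for i in range(n)]
    return s
```

With a single stored pattern and a cue scaled down to 10% strength, recall still converges to the stored pattern once the internal gain is raised; in the full model, the same compensation is what produces spontaneous, biased retrieval.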
Template-Based Algorithms for Connectionist Rule Extraction
Alexander, Jay A., Mozer, Michael C.
Casting neural network weights in symbolic terms is crucial for interpreting and explaining the behavior of a network. Additionally, in some domains, a symbolic description may lead to more robust generalization. We present a principled approach to symbolic rule extraction based on the notion of weight templates, parameterized regions of weight space corresponding to specific symbolic expressions. With an appropriate choice of representation, we show how template parameters may be efficiently identified and instantiated to yield the optimal match to a unit's actual weights. Depending on the requirements of the application domain, our method can accommodate arbitrary disjunctions and conjunctions with O(k) complexity, simple n-of-m expressions with O(k!)
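The core fitting step implied by the abstract, matching a unit's actual weights against candidate templates, can be sketched as a least-squares problem: for a template t with entries in {-1, 0, +1} and a free scale parameter a, the optimal a has a closed form, and the template with the smallest residual is the best symbolic match. This is a hypothetical sketch of the idea only; the paper's template families, bias handling, and complexity results are more elaborate, and the names below are illustrative:

```python
def best_template_match(w, templates):
    """Fit each candidate template to a unit's weight vector w.

    For a template t (entries in {-1, 0, +1}), the least-squares optimal
    scale is a* = (t . w) / (t . t); return the template minimizing the
    residual ||w - a* t||^2, along with its fitted scale.
    """
    best = None
    for t in templates:
        tt = sum(ti * ti for ti in t)
        if tt == 0:
            continue  # all-zero template matches nothing
        a = sum(ti * wi for ti, wi in zip(t, w)) / tt
        resid = sum((wi - a * ti) ** 2 for ti, wi in zip(t, w))
        if best is None or resid < best[0]:
            best = (resid, t, a)
    return best[1], best[2]
```

For example, a unit with weights close to [2, 2, 0] is best matched by the template [1, 1, 0] with scale about 2, which would read symbolically as a conjunction of the first two inputs.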