A Neural Model of Delusions and Hallucinations in Schizophrenia

Neural Information Processing Systems

We implement and study a computational model of Stevens' [1992] theory of the pathogenesis of schizophrenia. This theory hypothesizes that the onset of schizophrenia is associated with reactive synaptic regeneration occurring in brain regions receiving degenerating temporal lobe projections. Concentrating on one such area, the frontal cortex, we model a frontal module as an associative memory neural network whose input synapses represent incoming temporal projections. We analyze how, in the face of weakened external input projections, compensatory strengthening of internal synaptic connections and increased noise levels can maintain memory capacities (which are generally preserved in schizophrenia). However, these compensatory changes also lead to spontaneous, biased retrieval of stored memories, which accounts for the occurrence of schizophrenic delusions and hallucinations without any apparent external trigger, and for their tendency to concentrate on just a few central themes. Our results explain why these symptoms tend to wane as schizophrenia progresses, and why delayed therapeutic intervention leads to a much slower response.
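The core dynamics can be sketched with a minimal Hopfield-style associative memory standing in for the frontal module. All sizes, the `gain` (compensatory internal strengthening), and the `noise` level are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64

# Store a few random memories via a Hebbian outer-product rule.
P = rng.choice([-1, 1], size=(3, N))
W = (P.T @ P) / N
np.fill_diagonal(W, 0)

def settle(s, W, gain=1.0, noise=0.0, steps=1500):
    """Asynchronous updates with no external input. 'gain' stands in for
    compensatory strengthening of internal synapses, 'noise' for the
    increased noise level."""
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(N)
        h = gain * (W[i] @ s) + noise * rng.normal()
        s[i] = 1 if h >= 0 else -1
    return s

# Start from pure noise: internal dynamics alone settle onto a stored memory,
# i.e. spontaneous retrieval without any external trigger.
s = settle(rng.choice([-1, 1], size=N), W, gain=2.0, noise=0.5)
overlaps = np.abs(P @ s) / N
print(overlaps.max())  # well above the ~0.125 chance level
```

With only a handful of attractors, every such spontaneous retrieval lands on one of a few stored patterns, mirroring the concentration of symptoms on a few central themes.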




Template-Based Algorithms for Connectionist Rule Extraction

Neural Information Processing Systems

Casting neural network weights in symbolic terms is crucial for interpreting and explaining the behavior of a network. Additionally, in some domains, a symbolic description may lead to more robust generalization. We present a principled approach to symbolic rule extraction based on the notion of weight templates, parameterized regions of weight space corresponding to specific symbolic expressions. With an appropriate choice of representation, we show how template parameters may be efficiently identified and instantiated to yield the optimal match to a unit's actual weights. Depending on the requirements of the application domain, our method can accommodate arbitrary disjunctions and conjunctions with O(k) complexity, simple n-of-m expressions with O(k!)
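A rough illustration of the weight-template idea: a template fixes which inputs participate (a sign pattern), leaving a single shared magnitude parameter that can be fit in closed form. The example unit, its weights, and the restriction to conjunctive sign patterns are assumptions for brevity; the paper's method is more general:

```python
import numpy as np

# A hypothetical trained unit's incoming weights (AND-like on inputs 0 and 1).
w = np.array([2.1, 1.9, 0.1, -0.2])

def fit_template(w, s):
    """Least-squares fit of a template: weights are t * s for a sign
    pattern s in {-1, 0, +1}; the single parameter t has a closed form."""
    t = (s @ w) / max(s @ s, 1)
    err = np.sum((w - t * s) ** 2)
    return t, err

# Enumerate which inputs participate; signs follow the actual weights.
best = None
for mask in range(1, 2 ** len(w)):
    s = np.array([(mask >> i) & 1 for i in range(len(w))]) * np.sign(w).astype(int)
    t, err = fit_template(w, s)
    if best is None or err < best[2]:
        best = (s, t, err)

s, t, err = best
print(list(s), round(t, 2))  # inputs 0 and 1 participate with shared weight ~2.0
```

The best template here selects exactly the two large weights, i.e. the symbolic reading "fire when inputs 0 and 1 are both active."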


Using a Saliency Map for Active Spatial Selective Attention: Implementation & Initial Results

Neural Information Processing Systems

School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213

In many vision-based tasks, the ability to focus attention on the important portions of a scene is crucial for good performance. In this paper we present a simple method of achieving spatial selective attention through the use of a saliency map. The saliency map indicates which regions of the input retina are important for performing the task. The saliency map is created through predictive auto-encoding. The performance of this method is demonstrated on two simple tasks which have multiple very strong distracting features in the input retina. Architectural extensions and application directions for this model are presented. On some tasks this extra input can easily be ignored; often, however, the similarity between the important input features and the irrelevant features is great enough to interfere with task performance.
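A minimal sketch of the gating step, applying a saliency map multiplicatively to the input retina. The map is hand-set here purely for illustration; in the paper it is learned via predictive auto-encoding:

```python
import numpy as np

# Toy 1-D "retina": a task-relevant feature plus an equally strong distractor.
retina = np.zeros(16)
retina[4] = 1.0   # relevant feature
retina[12] = 1.0  # distractor

# Saliency map marking which retinal regions matter for the task
# (hand-set here; learned by predictive auto-encoding in the paper).
saliency = np.zeros(16)
saliency[2:7] = 1.0

# Attention as multiplicative gating of the retina before the task network.
gated = retina * saliency
print(gated[4], gated[12])  # 1.0 0.0 : the distractor is suppressed
```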


A Growing Neural Gas Network Learns Topologies

Neural Information Processing Systems

An incremental network model is introduced which is able to learn the important topological relations in a given set of input vectors by means of a simple Hebb-like learning rule. In contrast to previous approaches like the "neural gas" method of Martinetz and Schulten (1991, 1994), this model has no parameters which change over time and is able to continue learning, adding units and connections, until a performance criterion has been met. Applications of the model include vector quantization, clustering, and interpolation.
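The Hebb-like rule at the core of the model connects the best- and second-best-matching units for each input. The sketch below fixes the units at the corners of the unit square for brevity; the full growing neural gas additionally inserts units and ages/removes edges:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed reference units at the corners of the unit square (illustrative).
units = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edges = set()

for _ in range(200):
    x = rng.random(2)  # input drawn uniformly from the square
    d = np.linalg.norm(units - x, axis=1)
    s1, s2 = np.argsort(d)[:2]
    edges.add(tuple(sorted((int(s1), int(s2)))))  # link the two nearest units

print(sorted(edges))  # the four sides of the square; no diagonals
```

The learned edge set recovers the topology of the input distribution: adjacent corners become connected, while the diagonals, which are never the two nearest units for any input, do not.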


From Data Distributions to Regularization in Invariant Learning

Neural Information Processing Systems

For unbiased models the regularizer reduces to the intuitive form that penalizes the mean squared difference between the network output for transformed and untransformed inputs, i.e. the error in satisfying the desired invariance. In general the regularizer includes a term that measures correlations between the error in fitting the data and the error in satisfying the desired invariance. For infinitesimal transformations, the regularizer is equivalent (up to terms linear in the variance of the transformation parameters) to the tangent prop form given by Simard et al.
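The unbiased-model form of the regularizer, the mean squared difference between outputs on transformed and untransformed inputs, can be sketched directly. The linear network, weights, and cyclic-shift transformation below are illustrative assumptions:

```python
import numpy as np

# Hypothetical network f (linear for brevity) and sample inputs.
W = np.array([[0.5, -0.3, 0.2]])
f = lambda x: W @ x

def invariance_penalty(xs, transform, f):
    """Mean squared difference between outputs on transformed and
    untransformed inputs: the error in satisfying the desired invariance."""
    return float(np.mean([(f(transform(x)) - f(x)) ** 2 for x in xs]))

shift = lambda x: np.roll(x, 1)  # the desired invariance: cyclic shift
xs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
penalty = invariance_penalty(xs, shift, f)
print(penalty)  # nonzero: this f is far from shift-invariant
```

Adding this penalty to the data-fitting loss pushes the trained network toward outputs that are unchanged by the transformation.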


Higher Order Statistical Decorrelation without Information Loss

Neural Information Processing Systems

A neural network learning paradigm based on information theory is proposed as a way to perform, in an unsupervised fashion, redundancy reduction among the elements of the output layer without loss of information from the sensory input. The model developed performs nonlinear decorrelation up to higher orders of the cumulant tensors and results in probabilistically independent components of the output layer. This means that we need not assume a Gaussian distribution at either the input or the output. The theory presented is related to the unsupervised-learning theory of Barlow, which proposes redundancy reduction as the goal of cognition. When nonlinear units are used, nonlinear principal component analysis is obtained.
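For contrast with the paper's higher-order, nonlinear method, the second-order baseline, decorrelation by linear whitening, can be sketched as follows (the data and mixing matrix are illustrative; whitening removes only second-order correlations, not the higher-order cumulant dependencies the paper targets):

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated 2-D data produced by mixing independent sources.
A = np.array([[1.0, 0.8], [0.0, 1.0]])
X = A @ rng.normal(size=(2, 1000))

# Second-order decorrelation by whitening: rotate to the covariance
# eigenbasis and rescale each direction to unit variance.
C = np.cov(X)
vals, vecs = np.linalg.eigh(C)
Y = np.diag(vals ** -0.5) @ vecs.T @ X

print(np.round(np.cov(Y), 6))  # ≈ identity: outputs are uncorrelated
```

For Gaussian data this already yields independence; for non-Gaussian inputs, higher-order cumulants remain, which is why the paper's nonlinear network goes beyond this step.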


Single Transistor Learning Synapses

Neural Information Processing Systems

Paul Hasler, Chris Diorio, Bradley A. Minch, Carver Mead

The past few years have produced a number of efforts to design VLSI chips which "learn from experience." The first step toward this goal is developing a silicon analog for a synapse. We have successfully developed such a synapse using only a single transistor.


Learning Saccadic Eye Movements Using Multiscale Spatial Filters

Neural Information Processing Systems

Such sensors realize the simultaneous need for a wide field of view and good visual acuity. One popular class of space-variant sensors is formed by log-polar sensors, which have a small area near the optical axis of greatly increased resolution (the fovea) coupled with a peripheral region that witnesses a gradual logarithmic falloff in resolution as one moves radially outward. These sensors are inspired by similar structures found in the primate retina, where one finds both a peripheral region of gradually decreasing acuity and a circularly symmetric area centralis characterized by a greater density of receptors and a disproportionate representation in the optic nerve [3]. The peripheral region, though of low visual acuity, is more sensitive to light intensity and movement. The existence of a region optimized for discrimination and recognition, surrounded by a region geared towards detection, thus allows the image of an object of interest detected in the outer region to be placed on the more analytic center for closer scrutiny. Such a strategy, however, necessitates the existence of (a) methods to determine which location in the periphery to foveate next, and (b) fast gaze-shifting mechanisms to achieve this.
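The logarithmic falloff can be made concrete with the standard log-polar coordinate map (a generic sketch, not the paper's specific sensor model; `rho0` is an assumed fovea radius inside which the mapping is not applied):

```python
import numpy as np

def log_polar(x, y, rho0=1.0):
    """Map retinal coordinates to (log-radius, angle). Resolution falls off
    logarithmically with eccentricity, as in log-polar sensors; inside the
    fovea (r < rho0) the log-radial coordinate is clamped to 0."""
    r = np.hypot(x, y)
    return np.log(max(r, rho0) / rho0), np.arctan2(y, x)

# Equal steps in the log-radial coordinate cover exponentially larger
# retinal distances as one moves into the periphery.
for r in (1, 2, 4, 8):
    print(r, round(log_polar(r, 0.0)[0], 3))
```

Doubling the eccentricity adds a constant increment (log 2) in sensor coordinates, which is what trades peripheral acuity for field of view at a fixed number of receptors.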