Patterns of damage in neural networks: The effects of lesion area, shape and number
Ruppin, Eytan, Reggia, James A.
Current understanding of the effects of damage on neural networks is rudimentary, even though such understanding could lead to important insights concerning neurological and psychiatric disorders. Motivated by this consideration, we present a simple analytical framework for estimating the functional damage resulting from focal structural lesions to a neural network.
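As a concrete illustration of the kind of experiment this framework addresses, here is a minimal simulation sketch (our own construction, not the paper's analytical model; the network size, lesion radius, and pattern count are arbitrary choices): a Hopfield-style associative memory whose units are laid out on a 2-D grid receives a focal structural lesion, and the functional damage is read off as the drop in recall overlap.

```python
# Minimal sketch: functional damage from a focal lesion in a Hopfield-style
# associative memory whose units sit on a 2-D grid. All sizes and parameter
# values are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
SIDE = 16                      # units arranged on a SIDE x SIDE grid
N = SIDE * SIDE                # number of neurons
P = 10                         # number of stored +/-1 patterns

patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns).astype(float) / N   # Hebbian weights
np.fill_diagonal(W, 0.0)

coords = np.array([(i, j) for i in range(SIDE) for j in range(SIDE)])

def focal_lesion(W, center, radius):
    """Zero all synapses of units within `radius` of `center` (structural lesion)."""
    dead = np.linalg.norm(coords - center, axis=1) <= radius
    Wl = W.copy()
    Wl[dead, :] = 0.0
    Wl[:, dead] = 0.0
    return Wl

def recall(W, cue, steps=20):
    """Synchronous retrieval dynamics from a (possibly noisy) cue."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Functional damage = drop in overlap with the stored pattern after lesioning.
target = patterns[0]
cue = target * rng.choice([1, -1], size=N, p=[0.9, 0.1])  # 10% flipped bits
for radius in (0, 2, 4, 6):
    Wl = focal_lesion(W, center=np.array([SIDE // 2, SIDE // 2]), radius=radius)
    overlap = recall(Wl, cue) @ target / N
    print(f"lesion radius {radius}: recall overlap {overlap:+.2f}")
```

Varying the lesion's center, its shape, or splitting one large lesion into several small ones of equal total area is exactly the kind of manipulation the paper's title refers to.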
Patterns of damage in neural networks: The effects of lesion area, shape and number
Ruppin, Eytan, Reggia, James A.
Understanding the response of neural nets to structural/functional damage is important for a variety of reasons, e.g., in assessing the performance of neural network hardware, and in gaining understanding of the mechanisms underlying neurological and psychiatric disorders. Recently, there has been a growing interest in constructing neural models to study how specific pathological neuroanatomical and neurophysiological changes can result in various clinical manifestations, and to investigate the functional organization of the symptoms that result from specific brain pathologies (reviewed in [1, 2]). In the area of associative memory models specifically, early studies found an increase in memory impairment with increasing lesion severity (in accordance with Lashley's classical 'mass action' principle), and showed that slowly developing lesions have less pronounced effects than equivalent acute lesions [3]. Recently, it was shown that the gradual pattern of clinical deterioration manifested in the majority of Alzheimer's patients can be accounted for, and that different synaptic compensation rates can account for the observed variation in the severity and progression rate of this disease [4]. However, this past work is limited in that model elements have no spatial relationships to one another (all elements are conceptually equidistant).
A Neural Model of Delusions and Hallucinations in Schizophrenia
Ruppin, Eytan, Reggia, James A., Horn, David
We implement and study a computational model of Stevens' [1992] theory of the pathogenesis of schizophrenia. This theory hypothesizes that the onset of schizophrenia is associated with reactive synaptic regeneration occurring in brain regions receiving degenerating temporal lobe projections. Concentrating on one such area, the frontal cortex, we model a frontal module as an associative memory neural network whose input synapses represent incoming temporal projections. We analyze how, in the face of weakened external input projections, compensatory strengthening of internal synaptic connections and increased noise levels can maintain memory capacities (which are generally preserved in schizophrenia). However, these compensatory changes adversely lead to spontaneous, biased retrieval of stored memories, which corresponds to the occurrence of schizophrenic delusions and hallucinations without any apparent external trigger and accounts for their tendency to concentrate on just a few central themes. Our results explain why these symptoms tend to wane as schizophrenia progresses, and why delayed therapeutic intervention leads to a much slower response.
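The mechanism lends itself to a toy demonstration. The sketch below is our own illustration, not the authors' model, and every parameter value is an assumption: a Hopfield-style module is driven by a non-memory "sensory" input whose gain is weakened while the internal Hebbian weights are rescaled by a compensation factor and the noise level is raised. In the compensated regime the module's state is captured by a stored memory even though the input never matched one, mirroring retrieval without an external trigger.

```python
# Toy illustration of the compensation scenario: weakened external input,
# strengthened internal synapses, added noise. All parameter values are
# illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)
N, P = 400, 5
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

def run(external_gain, compensation, noise, steps=30):
    """Drive the module with a non-memory 'sensory' input and report how
    strongly its final state is captured by any stored memory."""
    sensory = rng.choice([-1, 1], size=N)     # external input, not a memory
    s = sensory.copy()
    for _ in range(steps):
        field = (compensation * (W @ s)
                 + external_gain * sensory
                 + noise * rng.standard_normal(N))
        s = np.where(field >= 0, 1, -1)
    return np.max(np.abs(patterns @ s)) / N   # max |overlap| with a memory

print("healthy (e=1.0, c=1.0):     ", round(run(1.0, 1.0, 0.1), 2))
print("compensated (e=0.2, c=2.0): ", round(run(0.2, 2.0, 0.3), 2))
```

The qualitative point is that retrieval becomes internally driven: with weak input and strengthened internal synapses, the module settles into a stored memory that the external input never specified.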
Optimal Signalling in Attractor Neural Networks
Meilijson, Isaac, Ruppin, Eytan
It is well known that a given cortical neuron can respond with a different firing pattern for the same synaptic input, depending on its firing history and on the effects of modulator transmitters (see [Connors and Gutnick, 1990] for a review). The time span of different channel conductances is very broad, and the influence of some ionic currents varies with the history of the membrane potential [Lytton, 1991]. Motivated by the history-dependent nature of neuronal firing, we continue our …
History-Dependent Attractor Neural Networks
Meilijson, Isaac, Ruppin, Eytan
We present a methodological framework enabling a detailed description of the performance of Hopfield-like attractor neural networks (ANN) in the first two iterations. Using the Bayesian approach, we find that performance is improved when a history-based term is included in the neuron's dynamics. A further enhancement of the network's performance is achieved by judiciously choosing the censored neurons (those which become active in a given iteration) on the basis of the magnitude of their post-synaptic potentials. The contribution of biologically plausible, censored, history-dependent dynamics is especially marked in conditions of low firing activity and sparse connectivity, two important characteristics of the mammalian cortex. In such networks, the performance attained is higher than the performance of two 'independent' iterations, which represents an upper bound on the performance of history-independent networks.
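A rough sketch of the two ingredients named in the abstract (a hypothetical rendition: the history weight LAM, the censoring fraction FRAC, and all sizes are our assumptions, and the paper's Bayesian derivation of the optimal forms is not reproduced here): in the first iteration only the neurons with the largest field magnitudes update, and in the second iteration each neuron's field is combined with a history term from its initial state.

```python
# Illustrative two-iteration dynamics with censoring and a history term.
# Parameter values are assumptions; the paper derives the optimal choices.
import numpy as np

rng = np.random.default_rng(2)
N, P = 500, 25
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns).astype(float) / N
np.fill_diagonal(W, 0.0)

LAM = 0.5    # weight of the history term (assumed value)
FRAC = 0.5   # fraction of neurons allowed to signal, by |field| (assumed value)

def two_iterations(cue):
    s0 = cue.copy()
    # Iteration 1: only neurons with the strongest post-synaptic potentials
    # update ("censored" dynamics); the rest keep their initial value.
    f1 = W @ s0
    active = np.abs(f1) >= np.quantile(np.abs(f1), 1 - FRAC)
    s1 = np.where(active, np.where(f1 >= 0, 1, -1), s0)
    # Iteration 2: the field is combined with a history term from s0.
    f2 = W @ s1 + LAM * s0
    return np.where(f2 >= 0, 1, -1)

target = patterns[0]
cue = target * rng.choice([1, -1], size=N, p=[0.8, 0.2])  # 20% flipped bits
print("initial overlap:", cue @ target / N)
print("overlap after two censored, history-dependent iterations:",
      two_iterations(cue) @ target / N)
```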
Single-Iteration Threshold Hamming Networks
Meilijson, Isaac, Ruppin, Eytan, Sipper, Moshe
We analyze in detail the performance of a Hamming network classifying inputs that are distorted versions of one of its m stored memory patterns. The activation function of the memory neurons in the original Hamming network is replaced by a simple threshold function, yielding a threshold Hamming network (THN). The THN drastically reduces the time and space complexity of Hamming network classifiers. Originally presented in (Steinbuch 1961, Taylor 1964), the Hamming network (HN) has received renewed attention in recent years (Lippmann et al.). The HN calculates the Hamming distance between the input pattern and each memory pattern, and selects the memory with the smallest distance. It is composed of two subnets: the similarity subnet, consisting of an n-neuron input layer connected with an m-neuron memory layer, calculates the number of equal bits between the input and each memory pattern.
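The classification rule is easy to make concrete. The sketch below is our illustration (pattern sizes and the threshold value are assumptions; the paper's analysis of how to set the threshold is not reproduced): it compares the standard winner-take-all Hamming network, which selects the memory with maximal bit-similarity, against a single-iteration threshold variant in which a memory neuron simply fires if its similarity exceeds a fixed threshold.

```python
# Hamming network (winner-take-all) vs. a single-iteration threshold variant.
# n, m, and theta are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n, m = 100, 20                 # input size and number of stored memories
memories = rng.choice([-1, 1], size=(m, n))

def similarity(x):
    """Similarity subnet: number of equal bits between x and each memory."""
    return (n + memories @ x) // 2       # n_agree = (n + dot) / 2 for +/-1 bits

def hamming_net(x):
    """Standard HN: winner-take-all over similarities (minimal Hamming distance)."""
    return int(np.argmax(similarity(x)))

def threshold_hamming_net(x, theta=70):
    """Single-iteration THN: memory neuron k fires iff its similarity >= theta.
    theta is an assumed value; choosing it well is the subject of the paper."""
    return np.flatnonzero(similarity(x) >= theta)   # ideally exactly one neuron

x = memories[7] * rng.choice([1, -1], size=n, p=[0.9, 0.1])  # 10% distortion
print("HN winner:  ", hamming_net(x))
print("THN winners:", threshold_hamming_net(x))
```

Replacing the iterative winner-take-all competition of the original HN's second subnet with a fixed threshold is what removes the iteration, which is the source of the time and space savings the abstract refers to.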