Goto

Collaborating Authors

 Problem Solving


Capacity for Patterns and Sequences in Kanerva's SDM as Compared to Other Associative Memory Models

Neural Information Processing Systems

ABSTRACT The information capacity of Kanerva's Sparse Distributed Memory (SDM) and Hopfield-type neural networks is investigated. Under the approximations used here, it is shown that the total information stored in these systems is proportional to the number of connections in the network. The proportionality constant is the same for the SDM and Hopfield-type models, independent of the particular model or the order of the model. The approximations are checked numerically. This same analysis can be used to show that the SDM can store sequences of spatiotemporal patterns, and the addition of time-delayed connections allows the retrieval of context-dependent temporal patterns. A minor modification of the SDM can be used to store correlated patterns. INTRODUCTION Many different models of memory and thought have been proposed by scientists over the years. The learning rule considered here uses the outer product of patterns of 1s and -1s.
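
The outer-product rule mentioned above is easy to make concrete. Below is a minimal sketch, assuming a standard Hopfield-style autoassociative setup with ±1 patterns; the network size, number of patterns, and the zeroed diagonal are illustrative choices, not taken from the paper.

```python
import numpy as np

def train_outer_product(patterns):
    """Accumulate Hopfield-style weights from the outer products of +/-1 patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p) / n
    np.fill_diagonal(W, 0)              # no self-connections
    return W

def recall(W, probe, steps=20):
    """Synchronous recall: threshold the weighted sums until a fixed point is reached."""
    s = probe.copy()
    for _ in range(steps):
        s_next = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_next, s):
            break
        s = s_next
    return s

rng = np.random.default_rng(0)
stored = rng.choice([-1, 1], size=(5, 100))     # 5 random patterns, 100 units (illustrative)
W = train_outer_product(stored)
noisy = stored[0].copy()
noisy[:10] *= -1                                # corrupt 10 of 100 bits
print(np.array_equal(recall(W, noisy), stored[0]))   # expected: True
```

With five patterns in a 100-unit network the load is well below capacity, so the corrupted probe should settle back onto the stored pattern.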


The Capacity of the Kanerva Associative Memory is Exponential

Neural Information Processing Systems

THE CAPACITY OF THE KANERVA ASSOCIATIVE MEMORY IS EXPONENTIAL. P. A. Chou, Stanford, CA 94305. ABSTRACT The capacity of an associative memory is defined as the maximum number of words that can be stored and retrieved reliably by an address within a given sphere of attraction. It is shown by sphere packing arguments that the capacity grows exponentially as the address length increases. This exponential growth in capacity can actually be achieved by the Kanerva associative memory, provided its parameters are set optimally. Formulas for these optimal values are provided. The exponential growth in capacity for the Kanerva associative memory contrasts sharply with the sub-linear growth in capacity for the Hopfield associative memory.
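
To make the "sphere of attraction" idea concrete, here is a minimal sketch of a Kanerva-style sparse distributed memory read/write cycle; the address length, number of hard locations, and activation radius below are illustrative assumptions, not the optimal values derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, radius = 256, 2000, 111       # address length, hard locations, activation radius (illustrative)
hard_addresses = rng.integers(0, 2, size=(m, n))    # fixed random 0/1 hard addresses
counters = np.zeros((m, n), dtype=int)              # one counter vector per hard location

def active(addr):
    """Hard locations whose address lies within Hamming distance `radius` of addr."""
    return np.count_nonzero(hard_addresses != addr, axis=1) <= radius

def write(addr, word):
    """Increment counters for word bits equal to 1, decrement for 0, at every active location."""
    counters[active(addr)] += np.where(word == 1, 1, -1)

def read(addr):
    """Pool the counters of all active locations and threshold at zero."""
    return (counters[active(addr)].sum(axis=0) >= 0).astype(int)

word = rng.integers(0, 2, size=n)
write(word, word)                   # autoassociative use: the word addresses itself
probe = word.copy()
probe[:20] ^= 1                     # corrupt 20 address bits, staying inside the attraction sphere
print(np.array_equal(read(probe), word))   # expected: True
```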


REFLEXIVE ASSOCIATIVE MEMORIES

Neural Information Processing Systems

The memory capacity is found to be much smaller than the Kosko upper bound, which is the lesser of the two dimensions of the BAM. On average, a 64x64 BAM has about 68% of the capacity of the corresponding Hopfield memory with the same number of neurons.
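
For reference, the model whose capacity is being bounded can be sketched as a plain Kosko-style BAM; the pattern counts and dimensions below are illustrative, and this is the standard outer-product BAM rather than the reflexive variant discussed in the paper.

```python
import numpy as np

def train_bam(X, Y):
    """Kosko-style BAM weights: the sum of outer products of associated +/-1 pattern pairs."""
    return sum(np.outer(x, y) for x, y in zip(X, Y))

def bam_recall(W, x, iters=10):
    """Bounce between the two layers until the (x, y) pair stabilizes."""
    for _ in range(iters):
        y = np.where(x @ W >= 0, 1, -1)
        x_next = np.where(W @ y >= 0, 1, -1)
        if np.array_equal(x_next, x):
            break
        x = x_next
    return x, y

rng = np.random.default_rng(2)
X = rng.choice([-1, 1], size=(3, 64))   # 3 pairs on a 64x64 BAM (well under any capacity bound)
Y = rng.choice([-1, 1], size=(3, 64))
W = train_bam(X, Y)
probe = X[0].copy()
probe[:5] *= -1                         # corrupt 5 bits on the X side
print(np.array_equal(bam_recall(W, probe)[1], Y[0]))   # expected: True
```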


Invariant Object Recognition Using a Distributed Associative Memory

Neural Information Processing Systems

Invariant Object Recognition Using a Distributed Associative Memory. Harry Wechsler and George Lee Zimmerman, Department of Electrical Engineering, University of Minnesota, Minneapolis, MN 55455. Abstract This paper describes an approach to 2-dimensional object recognition. Complex-log conformal mapping is combined with a distributed associative memory to create a system which recognizes objects regardless of changes in rotation or scale. Recalled information from the memorized database is used to classify an object, reconstruct the memorized version of the object, and estimate the magnitude of changes in scale or rotation. The system response is resistant to moderate amounts of noise and occlusion. Several experiments, using real, gray-scale images, are presented to show the feasibility of our approach. Introduction The challenge of the visual recognition problem stems from the fact that the projection of an object onto an image can be confounded by several dimensions of variability, such as uncertain perspective, changing orientation and scale, sensor noise, occlusion, and nonuniform illumination.
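
The complex-log mapping is the piece that buys the rotation and scale invariance: rotations and scalings about the image center become translations in (log r, theta). Below is a minimal nearest-neighbour sketch of such a mapping; the output grid size and minimum radius are illustrative assumptions, and the paper's own preprocessing details may differ.

```python
import numpy as np

def complex_log_map(image, out_shape=(64, 64), r_min=1.0):
    """Resample an image onto a (log r, theta) grid centered on the image.
    Scaling and rotation of the input become translations in the output,
    which is what makes the representation convenient for an associative memory."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    n_r, n_t = out_shape
    log_r = np.linspace(np.log(r_min), np.log(r_max), n_r)
    theta = np.linspace(0.0, 2.0 * np.pi, n_t, endpoint=False)
    rr, tt = np.meshgrid(np.exp(log_r), theta, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return image[ys, xs]            # nearest-neighbour sampling

img = np.random.default_rng(3).random((128, 128))
print(complex_log_map(img).shape)   # (64, 64)
```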


Performance Measures for Associative Memories that Learn and Forget

Neural Information Processing Systems

The McCulloch/Pitts model discussed in [1] was one of the earliest neural network models to be analyzed. Some computational properties of what we call a Hopfield Associative Memory Network (HAMN), similar to the McCulloch/Pitts model, were discussed by Hopfield in [2]. The HAMN can be measured quantitatively by defining and evaluating its information capacity, as [2-6] have shown, but because of its simplified structure this network fails to exhibit the more complex computational capabilities that neural networks have. The HAMN belongs to a class of networks which we call static. In static networks the learning and recall procedures are separate.
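
As a concrete illustration of probing capacity in a HAMN-style network, the small numerical experiment below stores increasing numbers of random ±1 patterns with the standard outer-product rule and checks what fraction remain fixed points; the sizes are illustrative, and the breakdown near a load of roughly 0.14n is the classical Hopfield capacity estimate rather than a result from this paper.

```python
import numpy as np

def stable_fraction(n, k, rng):
    """Fraction of k stored +/-1 patterns that remain fixed points of an n-unit Hopfield net."""
    P = rng.choice([-1, 1], size=(k, n))
    W = P.T @ P / n
    np.fill_diagonal(W, 0)
    return np.mean([np.array_equal(np.where(W @ p >= 0, 1, -1), p) for p in P])

rng = np.random.default_rng(4)
n = 200
for k in (10, 20, 30, 40, 60):     # the classical estimate puts the breakdown near 0.14 * n
    print(k, stable_fraction(n, k, rng))
```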


LEARNING BY STATE RECURRENCE DETECTION

Neural Information Processing Systems

LEARNING BY STATE RECURRENCE DETECTION. Bruce E. Rosen, James M. Goodwin, and Jacques J. Vidal, University of California, Los Angeles, CA 90024. ABSTRACT This research investigates a new technique for unsupervised learning of nonlinear control problems. The approach is applied both to Michie and Chambers' BOXES algorithm and to Barto, Sutton and Anderson's extension, the ASE/ACE system, and has significantly improved the convergence rate of stochastically based learning automata. Recurrence learning is a new nonlinear reward-penalty algorithm. It exploits information found during learning trials to reinforce decisions resulting in the recurrence of nonfailing states. Recurrence learning applies positive reinforcement during the exploration of the search space, whereas in the BOXES or ASE algorithms only negative weight reinforcement is applied, and then only on failure. Simulation results show that the added information from recurrence learning increases the learning rate. Our empirical results show that recurrence learning is faster than both basic failure-driven learning and failure-prediction methods. Although recurrence learning has only been tested in failure-driven experiments, there are goal-directed learning applications where detection of recurring oscillations may provide useful information that reduces the learning time by applying negative, instead of positive, reinforcement.
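
A schematic rendering of the recurrence idea, not the authors' algorithm: decisions that lead back to an already-visited, nonfailing state are positively reinforced as soon as the recurrence is detected, while a failure penalizes the whole trial. The toy environment, state encoding, and reward sizes below are invented for illustration.

```python
import random

def toy_step(state, action):
    """Toy 1-D environment: action 0 pushes left, 1 pushes right; leaving [-3, 3] is a failure."""
    nxt = state + (1 if action == 1 else -1) + random.choice([-1, 0, 1])
    return nxt, abs(nxt) > 3

def run_trial(env_step, weights, n_actions=2, alpha=0.1, max_steps=200):
    """Schematic recurrence-learning trial: reward decisions that close a loop back to an
    already-visited, nonfailing state; penalize every decision of the trial on failure."""
    last_seen, decisions, state = {}, [], 0
    for t in range(max_steps):
        w = weights.setdefault(state, [0.0] * n_actions)
        action = max(range(n_actions), key=lambda a: w[a] + random.gauss(0, 0.5))
        decisions.append((state, action))
        state, failed = env_step(state, action)
        if failed:
            for s, a in decisions:                     # negative reinforcement only on failure
                weights[s][a] -= alpha
            return t
        if state in last_seen:                         # recurrence of a nonfailing state detected
            for s, a in decisions[last_seen[state]:]:  # reinforce the decisions that closed the loop
                weights[s][a] += alpha
        last_seen[state] = len(decisions)
    return max_steps

random.seed(0)
weights = {}
print([run_trial(toy_step, weights) for _ in range(5)])   # trial lengths (longer means fewer failures)
```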


Performance Measures for Associative Memories that Learn and Forget

Neural Information Processing Systems

Recently, many modifications to the McCulloch/Pitts model have been proposed where both learning and forgetting occur. Given that the network never saturates (ceases to function effectively due to an overload of information), the learning updates can continue indefinitely. For these networks, we need to introduce performance measures in addition to the information capacity to evaluate the different networks. We mathematically define quantities such as the plasticity of a network, the efficacy of an information vector, and the probability of network saturation. From these quantities we analytically compare different networks.
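
One standard way to obtain learning with forgetting is to clip weights to a fixed bound (a "learning within bounds", or palimpsest, scheme), so that old traces fade and the network never saturates. The sketch below illustrates that behaviour; the bound, pattern counts, and network size are illustrative assumptions, and the scheme is not necessarily the one analyzed in the paper.

```python
import numpy as np

def store_with_clipping(W, p, bound):
    """Add the outer product of a +/-1 pattern, then clip each weight to [-bound, bound].
    The clipping bound makes old traces fade instead of letting the memory saturate."""
    n = len(p)
    W = np.clip(W + np.outer(p, p) / n, -bound, bound)
    np.fill_diagonal(W, 0)
    return W

def bit_overlap(W, p):
    """Fraction of bits of p reproduced by one synchronous update starting from p."""
    return np.mean(np.where(W @ p >= 0, 1, -1) == p)

rng = np.random.default_rng(5)
n, k = 200, 120
patterns = rng.choice([-1, 1], size=(k, n))
W = np.zeros((n, n))
for p in patterns:
    W = store_with_clipping(W, p, bound=5.0 / n)
print("recent:", np.mean([bit_overlap(W, p) for p in patterns[-5:]]))  # should stay near 1.0
print("oldest:", np.mean([bit_overlap(W, p) for p in patterns[:5]]))   # should decay toward 0.5
```

Because the bound limits how much evidence any single weight can hold, recent patterns remain retrievable while the oldest are gradually overwritten, which is the qualitative behaviour the performance measures above are meant to quantify.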