VLSI Implementation of a High-Capacity Neural Network Associative Memory
Chiueh, Tzi-Dar, Goodman, Rodney M.
In this paper we describe the VLSI design and testing of a high-capacity associative memory which we call the exponential correlation associative memory (ECAM). The prototype 3μ CMOS programmable chip is capable of storing 32 memory patterns of 24 bits each. The high capacity of the ECAM is partly due to the use of special exponentiation neurons, which are implemented via sub-threshold MOS transistors in this design. The prototype chip is capable of performing one associative recall in 3 μs.
1 ARCHITECTURE

Previously (Chiueh, 1989), we have proposed a general model for correlation-based associative memories, which includes a variant of the Hopfield memory and high-order correlation memories as special cases. This new exponential correlation associative memory (ECAM) possesses a very large storage capacity, which scales exponentially with the length of the memory patterns (Chiueh, 1988).
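The recall rule behind this model can be stated compactly: each stored pattern votes for the next state with a weight that grows exponentially in its correlation with the current state, and the network iterates until it reaches a fixed point. The Python sketch below is a minimal software model of that rule, not the chip's implementation; the function name, the exponentiation base, and the iteration limit are illustrative assumptions.

```python
import numpy as np

def ecam_recall(patterns, probe, base=2.0, max_iters=20):
    """Sketch of exponential-correlation recall: each stored pattern
    (row of `patterns`, entries +/-1) is weighted by base**<pattern, state>
    and the next state is the sign of the weighted sum. `base` and
    `max_iters` are illustrative, not taken from the paper."""
    x = probe.copy()
    for _ in range(max_iters):
        corr = patterns @ x                    # correlation with each pattern
        weights = base ** (corr - corr.max())  # shift by max for stability
        x_next = np.sign(weights @ patterns)   # weighted vote, then threshold
        x_next[x_next == 0] = 1                # break ties arbitrarily
        if np.array_equal(x_next, x):
            break                              # reached a fixed point
        x = x_next
    return x

# Usage: 32 random bipolar patterns of 24 bits, as in the prototype chip.
rng = np.random.default_rng(0)
U = rng.choice([-1, 1], size=(32, 24))
probe = U[0].copy()
probe[:3] *= -1                                # corrupt 3 of 24 bits
recovered = ecam_recall(U, probe)
print(np.array_equal(recovered, U[0]))         # usually True
```

Because the best-matching pattern's weight dominates exponentially, recall typically converges in one or two iterations when the probe is close to a stored pattern.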
A NEURAL NETWORK CLASSIFIER BASED ON CODING THEORY
Chiueh, Tzi-Dar, Goodman, Rodney
An input vector in the feature space is transformed into an internal representation which is a codeword in the code space, and then error-correction decoded in this space to classify the input feature vector to its class. Two classes of codes which give high performance are the Hadamard matrix code and the maximal length sequence code. We show that the number of classes stored in an N-neuron system is linear in N and significantly more than that obtainable by using the Hopfield-type memory as a classifier.

I. INTRODUCTION

Associative recall using neural networks has recently received a great deal of attention. Hopfield in his papers [1,2] describes a mechanism which iterates through a feedback loop and stabilizes at the memory element that is nearest the input, provided that not many memory vectors are stored in the machine. He has also shown that the number of memories that can be stored in an N-neuron system is about 0.15N for N between 30 and 100. McEliece et al. in their work [3] showed that for synchronous operation of the Hopfield memory about N/(2 log N) data vectors can be stored reliably when N is large. Abu-Mostafa [4] has predicted that the upper bound for the number of data vectors in an N-neuron Hopfield machine is N. We believe that one should be able to devise a machine with M, the number of data vectors, linear in N and larger than the 0.15N achieved by the Hopfield method.
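As an illustration of the decoding step, the sketch below assigns each class a row of a Sylvester Hadamard matrix and classifies a noisy codeword by maximum correlation, which for bipolar codewords is the same as minimum Hamming distance. This is a simplified stand-in for the paper's feature-to-code-space transform; the helper names and parameters are illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_codebook(num_classes, n):
    """Rows of a Sylvester Hadamard matrix as +/-1 class codewords.
    Requires n to be a power of two and num_classes <= n."""
    H = hadamard(n)
    return H[:num_classes]

def classify(codebook, x):
    """Decode a (possibly noisy) +/-1 vector to the class whose codeword
    has maximum correlation, i.e. minimum Hamming distance."""
    return int(np.argmax(codebook @ x))

# Usage: 16 classes with 32-bit codewords; flip 6 bits and decode.
rng = np.random.default_rng(1)
C = hadamard_codebook(16, 32)
noisy = C[5].copy()
flips = rng.choice(32, size=6, replace=False)
noisy[flips] *= -1
print(classify(C, noisy))  # 5: orthogonal rows tolerate up to 7 flips here
```

Since distinct Hadamard rows are mutually orthogonal, the correct codeword keeps a strictly larger correlation than any other as long as fewer than a quarter of the bits are flipped, which is the error-correction margin the classifier exploits.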