Connectivity Versus Entropy
Yaser S. Abu-Mostafa, California Institute of Technology, Pasadena, CA 91125

ABSTRACT How does the connectivity of a neural network (number of synapses per neuron) relate to the complexity of the problems it can handle (measured by the entropy)? Switching theory would suggest no relation at all, since all Boolean functions can be implemented using a circuit with very low connectivity (e.g., using two-input NAND gates). However, for a network that learns a problem from examples using a local learning rule, we prove that the entropy of the problem becomes a lower bound for the connectivity of the network.

INTRODUCTION The most distinguishing feature of neural networks is their ability to spontaneously learn the desired function from "training" samples, i.e., their ability to program themselves. Clearly, a given neural network cannot just learn any function; there must be some restrictions on which networks can learn which functions.
High Order Neural Networks for Efficient Associative Memory Design
Dreyfus, Gérard, Guyon, Isabelle, Nadal, Jean-Pierre, Personnaz, Léon
The designed networks exhibit the desired associative memory function: perfect storage and retrieval of pieces of information and/or sequences of information of any complexity.

INTRODUCTION In the field of information processing, an important class of potential applications of neural networks arises from their ability to perform as associative memories. Since the publication of J. Hopfield's seminal paper [1], investigations of the storage and retrieval properties of recurrent networks have led to a deep understanding of their properties. The basic limitations of these networks are the following:
- their storage capacity is of the order of the number of neurons;
- they are unable to handle structured problems;
- they are unable to classify non-linearly separable data.
(American Institute of Physics, 1988.) In order to circumvent these limitations, one has to introduce additional non-linearities. This can be done either by using "hidden" nonlinear units, or by considering multi-neuron interactions [2]. This paper presents learning rules for networks with multiple interactions, allowing the storage and retrieval either of static pieces of information (autoassociative memory) or of temporal sequences (associative memory), while preventing an explosive growth of the number of synaptic coefficients.

AUTOASSOCIATIVE MEMORY The problem that will be addressed in this paragraph is how to design an autoassociative memory with a recurrent (or feedback) neural network when the number p of prototypes is large compared to the number n of neurons. We consider a network of n binary neurons, operating in a synchronous mode, with period t.
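The recurrent autoassociative setup described above can be illustrated with the classical pairwise-interaction baseline that this paper generalizes with multi-neuron interactions. The sketch below, assuming a standard Hebbian storage rule and synchronous updates (it is not the paper's higher-order learning rule), stores a few prototypes in a network of binary (+1/−1) neurons and recalls one from a corrupted cue:

```python
import numpy as np

# Minimal sketch of a pairwise (Hopfield-style) autoassociative memory:
# binary +1/-1 neurons, Hebbian storage, synchronous retrieval. This is
# the baseline whose capacity limits motivate higher-order interactions.

def store(prototypes):
    """Hebbian rule: W = (1/n) * sum of outer products, zero self-coupling."""
    n = prototypes.shape[1]
    W = prototypes.T @ prototypes / n
    np.fill_diagonal(W, 0.0)
    return W

def retrieve(W, state, steps=10):
    """Synchronous mode: all neurons update in parallel each period."""
    for _ in range(steps):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1   # break ties deterministically
        if np.array_equal(new_state, state):
            break                       # reached a fixed point
        state = new_state
    return state

rng = np.random.default_rng(0)
n, p = 64, 4                            # p well below the pairwise capacity ~0.14 n
protos = rng.choice([-1, 1], size=(p, n))
W = store(protos)

noisy = protos[0].copy()
noisy[:6] *= -1                         # corrupt the cue by flipping 6 bits
recalled = retrieve(W, noisy)
print(np.mean(recalled == protos[0]))   # fraction of bits recovered
```

With p comparable to n, spurious fixed points proliferate and retrieval fails, which is the capacity limitation the multi-neuron learning rules are designed to circumvent.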
Phasor Neural Networks
ABSTRACT A novel network type is introduced which uses unit-length 2-vectors for local variables. As an example of its applications, associative memory nets are defined and their performance analyzed. Real systems corresponding to such 'phasor' models can be e.g.

INTRODUCTION Most neural network models use either binary local variables or scalars combined with sigmoidal nonlinearities. Rather awkward coding schemes have to be invoked if one wants to maintain linear relations between the local signals being processed in e.g.
Programmable Synaptic Chip for Electronic Neural Networks
Moopenn, Alexander, Langenbacher, H., Thakoor, A. P., Khanna, S. K.
The matrix chip contains a programmable 32×32 array of "long channel" NMOSFET binary connection elements implemented in a 3-µm bulk CMOS process. Since the neurons are kept off-chip, the synaptic chip serves as a "cascadable" building block for a multi-chip synaptic network as large as 512×512 in size. As an alternative to the programmable NMOSFET (long channel) connection elements, tailored thin film resistors are deposited, in series with FET switches, on some CMOS test chips, to obtain the weak synaptic connections. Although deposition and patterning of the resistors require additional processing steps, they promise substantial savings in silicon area. The performance of a synaptic chip in a 32-neuron breadboard system in an associative memory test application is discussed.

INTRODUCTION The highly parallel and distributive architecture of neural networks offers potential advantages in fault-tolerant and high speed associative information processing.
A Method for Evaluating Candidate Expert System Applications
Slagle, James, Wick, Michael R.
We built on previous work to develop an evaluation method that can be used to select expert system applications which are most likely to be successfully implemented. Both essential and desirable features of an expert system application are discussed. Essential features are used to ensure that the application does not require technology beyond the state of the art. Advice on helpful directions for evaluating candidate expert system applications is also given.
Foundations and Grand Challenges of Artificial Intelligence: AAAI Presidential Address
AAAI is a society devoted to supporting the progress in science, technology and applications of AI. I thought I would use this occasion to share with you some of my thoughts on the recent advances in AI, the insights and theoretical foundations that have emerged out of the past thirty years of stable, sustained, systematic explorations in our field, and the grand challenges motivating the research in our field.
Uncertainty in Artificial Intelligence
The workshop featured significant developments in application of theories of representation and reasoning under uncertainty. The effectiveness of these choices in AI systems tends to be best considered in terms of specific problem areas. Influence diagrams are emerging as a unifying representation, enabling tool development. Interest and results in uncertainty in AI are growing beyond the capacity of a workshop format.
Review of How Machines Think: A General Introduction to Artificial Intelligence Illustrated in Prolog
Nigel Ford's book purports to be both an introduction to AI and an examination of whether machines are cognizant entities. With this pairing, Ford intends to begin at the beginning, answering the question "what is AI?" and to proceed to his main thesis about whether machines can think. Unfortunately, Ford is unable to move on to the higher plane of his main thesis.