An Optimization Network for Matrix Inversion

Neural Information Processing Systems

Inverse matrix calculation can be considered as an optimization problem. We have demonstrated that this problem can be rapidly solved by highly interconnected, simple, neuron-like analog processors. A network for matrix inversion based on the concept of Hopfield's neural network was designed and implemented in electronic hardware. With slight modifications, the network is readily applicable to solving linear simultaneous equations efficiently. Notable features of this circuit are its potential speed, due to parallel processing, and its robustness against variations in device parameters.
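
The abstract does not give the circuit equations, but the idea of treating inversion as relaxation of an analog network can be sketched in software: minimize the energy E(X) = ||AX - I||² by following its negative gradient, with each entry of X playing the role of one neuron's state. A minimal sketch, not the authors' circuit; the learning rate and step count are illustrative assumptions:

```python
import numpy as np

def invert_by_descent(A, lr=0.01, steps=5000):
    """Approximate A^-1 by gradient descent on E(X) = ||A X - I||_F^2."""
    n = A.shape[0]
    I = np.eye(n)
    X = np.zeros((n, n))
    for _ in range(steps):
        X -= lr * 2.0 * A.T @ (A @ X - I)  # follow -dE/dX; one "neuron" per entry
    return X

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
X = invert_by_descent(A)
print(np.round(A @ X, 3))  # should be close to the identity
```

In the analog network the same descent happens continuously and in parallel across all entries, which is the source of the speed advantage the abstract claims.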


Scaling Properties of Coarse-Coded Symbol Memories

Neural Information Processing Systems

DCPS' memory scheme is a modified version of the Random Receptors method [5]. The symbol space is the set of all triples over a 25-letter alphabet. Units have fixed-size receptive fields organized as 6 x 6 x 6 subspaces. Patterns are manipulated to minimize the variance in pattern size across symbols.
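
A toy model of this receptive-field scheme can make it concrete. The pool size of 2000 units and the purely random field construction below are illustrative assumptions, not DCPS' actual parameters: each unit responds to any triple falling inside its 6 x 6 x 6 subspace, and a symbol's coarse-coded pattern is the set of units it activates.

```python
import random

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXY"  # a 25-letter alphabet

def make_unit(rng):
    # Receptive field: an independently chosen 6-letter subset per position,
    # i.e. a random 6 x 6 x 6 subspace of the triple space.
    return tuple(frozenset(rng.sample(ALPHABET, 6)) for _ in range(3))

def responds(unit, triple):
    return all(letter in field for letter, field in zip(triple, unit))

rng = random.Random(0)
units = [make_unit(rng) for _ in range(2000)]  # hypothetical pool size

def pattern(triple):
    # A symbol's coarse-coded pattern: the set of units it activates.
    return {i for i, u in enumerate(units) if responds(u, triple)}

# Each triple activates ~ 2000 * (6/25)^3, roughly 28 units on average.
print(len(pattern(("C", "A", "T"))))
```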


Analysis of Distributed Representation of Constituent Structure in Connectionist Systems

Neural Information Processing Systems

The method allows the fully distributed representation of symbolic structures: the roles in the structures, as well as the fillers for those roles, can be arbitrarily non-local. Fully and partially localized special cases reduce to existing cases of connectionist representations of structured data; the tensor product representation generalizes these and the few existing examples of fully distributed representations of structures. The representation saturates gracefully as larger structures are represented; it permits recursive construction of complex representations from simpler ones; it respects the independence of the capacities to generate and maintain multiple bindings in parallel; it extends naturally to continuous structures and continuous representational patterns; it permits values to also serve as variables; it enables analysis of the interference of symbolic structures stored in associative memories; and it leads to characterization of optimal distributed representations of roles and a recirculation algorithm for learning them. INTRODUCTION Any model of complex information processing in networks of simple processors must solve the problem of representing complex structures over network elements. Connectionist models of realistic natural language processing, for example, must employ computationally adequate representations of complex sentences. Many connectionists feel that to develop connectionist systems with the computational power required by complex tasks, distributed representations must be used: an individual processing unit must participate in the representation of multiple items, and each item must be represented as a pattern of activity of multiple processors. Connectionist models have used more or less distributed representations of more or less complex structures, but little if any general analysis of the problem of distributed representation of complex information has been carried out. This paper reports results of an analysis of a general method called the tensor product representation.
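
The core operation is easy to state concretely: a structure is the superposition of role-filler outer products, and with linearly independent (here orthonormal) role vectors a filler can be recovered by projecting on its role. A minimal sketch under those assumptions; the vector sizes and the three-position role set are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three orthonormal role vectors (one per position in a triple).
roles, _ = np.linalg.qr(rng.normal(size=(3, 3)))
fillers = {s: rng.normal(size=8) for s in "ABC"}

# Bind each filler to its role by an outer product and superimpose.
structure = sum(np.outer(roles[i], fillers[s]) for i, s in enumerate("CAB"))

# Unbind: with orthonormal roles, projecting on a role recovers its filler.
recovered = roles[1] @ structure
print(np.allclose(recovered, fillers["A"]))  # True
```

With merely linearly independent roles, unbinding uses the dual basis instead of the roles themselves; the saturation the abstract mentions appears when the role vectors are not independent.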


Connectivity Versus Entropy

Neural Information Processing Systems

Yaser S. Abu-Mostafa, California Institute of Technology, Pasadena, CA 91125. ABSTRACT How does the connectivity of a neural network (number of synapses per neuron) relate to the complexity of the problems it can handle (measured by the entropy)? Switching theory would suggest no relation at all, since all Boolean functions can be implemented using a circuit with very low connectivity (e.g., using two-input NAND gates). However, for a network that learns a problem from examples using a local learning rule, we prove that the entropy of the problem becomes a lower bound for the connectivity of the network. INTRODUCTION The most distinguishing feature of neural networks is their ability to spontaneously learn the desired function from 'training' samples, i.e., their ability to program themselves. Clearly, a given neural network cannot just learn any function; there must be some restrictions on which networks can learn which functions.
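
The switching-theory observation is easy to make concrete: any Boolean function can be wired from two-input NAND gates, so per-gate connectivity stays at two regardless of the function's complexity. A small example building XOR from four NANDs:

```python
def nand(a, b):
    return 1 - (a & b)

# XOR from four two-input NAND gates: universality with fan-in 2.
def xor(a, b):
    n1 = nand(a, b)
    return nand(nand(a, n1), nand(b, n1))

print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

The paper's point is that this freedom disappears once the circuit must acquire the function through local learning rather than explicit design.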


High Order Neural Networks for Efficient Associative Memory Design

Neural Information Processing Systems

The designed networks exhibit the desired associative memory function: perfect storage and retrieval of pieces of information and/or sequences of information of any complexity. INTRODUCTION In the field of information processing, an important class of potential applications of neural networks arises from their ability to perform as associative memories. Since the publication of J. Hopfield's seminal paper [1], investigations of the storage and retrieval properties of recurrent networks have led to a deep understanding of their properties. The basic limitations of these networks are the following: their storage capacity is of the order of the number of neurons; they are unable to handle structured problems; and they are unable to classify non-linearly separable data. In order to circumvent these limitations, one has to introduce additional non-linearities. This can be done either by using "hidden" nonlinear units, or by considering multi-neuron interactions [2]. This paper presents learning rules for networks with multiple interactions, allowing the storage and retrieval either of static pieces of information (autoassociative memory) or of temporal sequences (associative memory), while preventing an explosive growth of the number of synaptic coefficients. AUTOASSOCIATIVE MEMORY The problem that will be addressed in this section is how to design an autoassociative memory with a recurrent (or feedback) neural network when the number p of prototypes is large compared to the number n of neurons. We consider a network of n binary neurons, operating in a synchronous mode, with period t.
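
The paper's specific learning rules are not given in this excerpt, but the flavor of multi-neuron interactions can be sketched with a second-order analogue of the Hebb rule: couplings T[i,j,k] built from triple products of prototype components, and a sign update driven by terms quadratic in the state. A toy model; the sizes, the tensor rule, and the corruption level are illustrative assumptions, not the authors' rules:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 24, 6
X = rng.choice([-1, 1], size=(p, n))        # prototype patterns

# Second-order Hebb-like couplings: T[i,j,k] = sum over prototypes of x_i x_j x_k.
T = np.einsum("pi,pj,pk->ijk", X, X, X)

def recall(s, steps=5):
    for _ in range(steps):
        s = np.sign(np.einsum("ijk,j,k->i", T, s, s))  # quadratic local field
    return s

probe = X[0].copy()
probe[:3] *= -1                             # corrupt three bits
print(np.array_equal(recall(probe), X[0]))  # typically True at this low loading
```

Note the cost of the extra non-linearity: the number of synaptic coefficients grows as n³ here, which is exactly the explosion the paper's learning rules are designed to prevent.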


Phasor Neural Networks

Neural Information Processing Systems

ABSTRACT A novel network type is introduced which uses unit-length 2-vectors for local variables. As an example of its applications, associative memory nets are defined and their performance analyzed. Real systems corresponding to such 'phasor' models can be e.g. INTRODUCTION Most neural network models use either binary local variables or scalars combined with sigmoidal nonlinearities. Rather awkward coding schemes have to be invoked if one wants to maintain linear relations between the local signals being processed in e.g.
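
Phasor states can be modeled as unit-modulus complex numbers, which makes an associative memory analogue easy to sketch: Hebb-like Hermitian couplings between stored phase patterns, and an update that renormalizes each local field back to unit length. A minimal sketch; the pattern count, noise level, and the complex-Hebb rule are illustrative assumptions rather than the paper's definitions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 64, 4

# Stored patterns: phasors, i.e. one unit-length complex number per site.
Z = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(p, n)))

W = Z.T @ Z.conj() / n          # Hebb-like Hermitian couplings
np.fill_diagonal(W, 0)

def recall(z, steps=10):
    for _ in range(steps):
        h = W @ z
        z = h / np.abs(h)       # renormalize each site to the unit circle
    return z

noisy = Z[0] * np.exp(1j * rng.normal(0, 0.3, n))  # jitter the stored phases
out = recall(noisy)
print(abs(np.vdot(Z[0], out)) / n)  # overlap near 1 means successful recall
```

The overlap is measured in magnitude because, like the physical oscillator systems the abstract alludes to, the dynamics are invariant under a global phase shift.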


Programmable Synaptic Chip for Electronic Neural Networks

Neural Information Processing Systems

The matrix chip contains a programmable 32x32 array of "long channel" NMOSFET binary connection elements implemented in a 3-µm bulk CMOS process. Since the neurons are kept off-chip, the synaptic chip serves as a "cascadable" building block for a multi-chip synaptic network as large as 512x512 in size. As an alternative to the programmable NMOSFET (long-channel) connection elements, tailored thin-film resistors are deposited, in series with FET switches, on some CMOS test chips to obtain the weak synaptic connections. Although deposition and patterning of the resistors require additional processing steps, they promise substantial savings in silicon area. The performance of a synaptic chip in a 32-neuron breadboard system in an associative memory test application is discussed. INTRODUCTION The highly parallel and distributive architecture of neural networks offers potential advantages in fault-tolerant and high-speed associative information processing.
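
The cascading idea can be illustrated with a simple software model: each chip contributes one 32x32 block of binary connections, chips along a row share input lines, and output currents of chips down a column sum, so a grid of chips acts as one large connection matrix. A sketch under assumed parameters; the 4x4 grid and random weights are for illustration only:

```python
import numpy as np

CHIP = 32      # one chip: a 32x32 array of binary connection elements
GRID = 4       # assumed 4x4 grid of chips -> a 128x128 synaptic network

rng = np.random.default_rng(0)
tiles = [[rng.integers(0, 2, size=(CHIP, CHIP)) for _ in range(GRID)]
         for _ in range(GRID)]

# Cascading: chips in a row share input lines; output currents of chips in
# a column sum, so the grid behaves as one large connection matrix.
W = np.block(tiles)
print(W.shape)  # (128, 128)
```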



A Method for Evaluating Candidate Expert System Applications

AI Magazine

Second, the problem domain of the task is stable. This means that the domain should be well established and unlikely to undergo vast changes during the life of the expert system project. This stability does not require that the problem-solving process required to perform the task be well understood, simply that the basics of the task domain be established. The expert used should be as good as possible. Two characteristics of the domain expert can help determine the degree of expertise. First, the expert is highly respected by experienced people in the domain field. Because the goal of the project is often to simulate the expert's performance, this expert should be viewed by others as a genuine expert. The application task requires little or no common sense. Although researchers are continuing to study the representation of commonsense knowledge, no practical systems have been developed to date (Lenat, Prakash, and Shepherd 1986). A problem requiring common sense on the part of the expert should be left to a human.


Uncertainty in Artificial Intelligence

AI Magazine

The Fourth Uncertainty in Artificial Intelligence workshop was held 19-21 August 1988. The workshop featured significant developments in the application of theories of representation and reasoning under uncertainty. A recurring idea at the workshop was the need to examine uncertainty calculi in the context of choosing representation, inference, and control methodologies. The effectiveness of these choices in AI systems tends to be best considered in terms of specific problem areas. These areas include automated planning, temporal reasoning, computer vision, medical diagnosis, fault detection, text analysis, distributed systems, and the behavior of nonlinear systems. Influence diagrams are emerging as a unifying representation, enabling tool development. Interest and results in uncertainty in AI are growing beyond the capacity of a workshop format.