Collaborating Authors

Goodman, Rodney M.


Bach in a Box - Real-Time Harmony

Neural Information Processing Systems

The learning and inferencing algorithms presented here speak an extended form of the classical figured bass representation common in Bach's time. Paired with a melody, figured bass provides a sufficient amount of information to reconstruct the harmonic content of a piece of music. Figured bass has several characteristics which make it well-disposed to learning rules. It is a symbolic format which uses a relatively small alphabet of symbols. It is also hierarchical - it specifies first the chord function that is to be played at the current note/timestep, then the scale step to be played by the bass voice, then additional information as needed to specify the alto and tenor scale steps. This allows our algorithm to fire sets of rules sequentially, to first determine the chord function which should be associated with a new melody note, and then to use that chord function as an input attribute to subsequent rulebases which determine the bass, alto, and tenor scale steps. In this way we can build up the final chord from simpler pieces, each governed by a specialized rulebase.
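
As a rough illustration of this sequential rule-firing scheme, the Python sketch below chains two toy rulebases: one mapping a melody scale degree to a chord function, and one mapping the chord function and melody back to a bass scale degree. The rule tables themselves are hypothetical placeholders, not rules learned by the paper's algorithm.

```python
# Minimal sketch of the sequential rule-firing scheme described above.
# The rulebases here are hypothetical stand-ins: each maps input
# attributes to one output attribute, mirroring the hierarchy
# chord function -> bass -> inner voices.

# Hypothetical toy rulebases keyed on scale degrees (1..7).
CHORD_RULES = {1: "I", 2: "V", 3: "I", 4: "IV", 5: "V", 6: "IV", 7: "V"}
BASS_RULES = {("I", 1): 1, ("I", 3): 1, ("IV", 4): 4, ("IV", 6): 4,
              ("V", 2): 5, ("V", 5): 5, ("V", 7): 5}

def harmonize_step(melody_degree):
    """Build up one chord from simpler pieces, one rulebase at a time."""
    # 1. Fire the chord-function rulebase on the new melody note.
    chord = CHORD_RULES[melody_degree]
    # 2. Feed that chord function, plus the melody, into the bass rulebase.
    bass = BASS_RULES.get((chord, melody_degree), 1)
    # 3. Alto and tenor would be chosen by further rulebases; here we just
    #    fill the remaining chord tones as a placeholder.
    chord_tones = {"I": [1, 3, 5], "IV": [4, 6, 1], "V": [5, 7, 2]}[chord]
    inner = [d for d in chord_tones if d not in (melody_degree, bass)]
    return {"chord": chord, "bass": bass, "inner_voices": inner}

if __name__ == "__main__":
    melody = [3, 2, 1, 4, 5, 1]  # a toy melody as scale degrees
    for note in melody:
        print(note, harmonize_step(note))
```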



Probability Estimation from a Database Using a Gibbs Energy Model

Neural Information Processing Systems

We present an algorithm for creating a neural network which produces accurate probability estimates as outputs. The network implements a Gibbs probability distribution model of the training database. This model is created by a new transformation relating the joint probabilities of attributes in the database to the weights (Gibbs potentials) of the distributed network model. The theory of this transformation is presented together with experimental results. One advantage of this approach is that the network weights are prescribed without iterative gradient descent. Used as a classifier, the network tied or outperformed published results on a variety of databases.
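
The paper's transformation from database probabilities to Gibbs potentials is not restated here, so the sketch below illustrates the general idea with the simplest case that admits an exact closed form: first-order (independent-attribute) potentials set to empirical log-odds, with no gradient descent involved.

```python
import numpy as np

# Minimal sketch of prescribing Gibbs weights directly from database
# statistics, with no gradient descent.  We illustrate with the
# first-order (independent-attribute) case, where the closed form is exact:
#   E(x) = -sum_i w_i x_i,  P(x) = exp(-E(x)) / Z,
#   w_i = log p_i / (1 - p_i)   (log-odds of attribute i in the database).

def fit_gibbs_weights(data):
    """data: (n_samples, n_attrs) binary array -> first-order potentials."""
    p = data.mean(axis=0).clip(1e-6, 1 - 1e-6)  # empirical P(x_i = 1)
    return np.log(p / (1 - p))                  # weights in one shot

def log_prob(x, w):
    """Log P(x) under the first-order Gibbs model with weights w."""
    log_z = np.log1p(np.exp(w)).sum()           # Z factorizes here
    return float(w @ x - log_z)

rng = np.random.default_rng(0)
data = (rng.random((1000, 5)) < [0.9, 0.2, 0.5, 0.7, 0.1]).astype(float)
w = fit_gibbs_weights(data)
# Roughly 0.9 * 0.8 * 0.5 * 0.7 * 0.9 for this attribute vector:
print(np.exp(log_prob(np.array([1, 0, 0, 1, 0.0]), w)))
```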


Learning Fuzzy Rule-Based Neural Networks for Control

Neural Information Processing Systems

A three-step method for function approximation with a fuzzy system is proposed. First, the membership functions and an initial rule representation are learned; second, the rules are compressed as much as possible using information theory; and finally, a computational network is constructed to compute the function value. This system is applied to two control examples: learning the truck and trailer backer-upper control system, and learning a cruise control system for a radio-controlled model car.

1 Introduction

Function approximation is the problem of estimating a function from a set of examples of its independent variables and function value. If there is prior knowledge of the type of function being learned, a mathematical model of the function can be constructed and the parameters perturbed until the best match is achieved. However, if there is no prior knowledge of the function, a model-free system such as a neural network or a fuzzy system may be employed to approximate an arbitrary nonlinear function. A neural network's inherent parallel computation is efficient for speed; however, the information learned is expressed only in the weights of the network. The advantage of fuzzy systems over neural networks is that the information learned is expressed in terms of linguistic rules. In this paper, we propose a method for learning a complete fuzzy system to approximate example data.
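
The sketch below shows the shape of the final computational step: a small fuzzy system with triangular membership functions and weighted-average defuzzification. The membership centers and rule outputs are hypothetical; in the proposed method they would be learned from examples and compressed before the network is built.

```python
import numpy as np

# Minimal sketch of a fuzzy system of the kind described above: triangular
# membership functions over the input, one output value per linguistic
# rule, and weighted-average (centroid-style) defuzzification.  All
# parameters here are hypothetical placeholders, not learned values.

IN_CENTERS = np.array([-1.0, 0.0, 1.0])    # "negative", "zero", "positive"
OUT_VALUES = np.array([-0.5, 0.0, 0.5])    # output value for each rule
WIDTH = 1.0                                # half-width of each triangle

def memberships(x):
    """Triangular membership of x in each input fuzzy set."""
    return np.clip(1.0 - np.abs(x - IN_CENTERS) / WIDTH, 0.0, None)

def infer(x):
    """Fire all rules and defuzzify by weighted average."""
    mu = memberships(x)                    # rule firing strengths
    if mu.sum() == 0.0:
        return 0.0                         # outside all rule supports
    return float(mu @ OUT_VALUES / mu.sum())

for x in (-0.75, -0.25, 0.0, 0.5):
    print(x, infer(x))
```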


VLSI Implementation of a High-Capacity Neural Network Associative Memory

Neural Information Processing Systems

In this paper we describe the VLSI design and testing of a high-capacity associative memory which we call the exponential correlation associative memory (ECAM). The prototype 3 μm CMOS programmable chip is capable of storing 32 memory patterns of 24 bits each. The high capacity of the ECAM is partly due to the use of special exponentiation neurons, which are implemented via sub-threshold MOS transistors in this design. The prototype chip is capable of performing one associative recall in 3 μs.

1 ARCHITECTURE

Previously (Chiueh, 1989), we have proposed a general model for correlation-based associative memories, which includes a variant of the Hopfield memory and high-order correlation memories as special cases. This new exponential correlation associative memory (ECAM) possesses a very large storage capacity, which scales exponentially with the length of memory patterns (Chiueh, 1988).
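
A software model of the recall dynamics is easy to state: each stored pattern is weighted by an exponential of its correlation with the current state (the role of the exponentiation neurons), and the weighted sum is thresholded. The sketch below assumes this standard ECAM update; the base of the exponential and the convergence loop are illustrative choices, not the chip's exact parameters.

```python
import numpy as np

# Software sketch of ECAM recall: every stored memory u_k is weighted by
# a ** <u_k, x> (the job of the exponentiation neurons), and the state is
# updated to the sign of the weighted sum of memories.

def ecam_recall(patterns, probe, a=2.0, max_iters=10):
    """patterns: (M, N) array of +/-1 memories; probe: length-N +/-1 vector."""
    x = probe.astype(float)
    for _ in range(max_iters):
        corr = patterns @ x                      # <u_k, x> for every memory
        w = a ** (corr - corr.max())             # rescaled for stability;
                                                 # a positive factor does not
                                                 # change the sign below
        x_new = np.sign(w @ patterns)            # threshold the weighted sum
        x_new[x_new == 0] = 1.0                  # break ties deterministically
        if np.array_equal(x_new, x):
            break                                # converged to a fixed point
        x = x_new
    return x

rng = np.random.default_rng(1)
mems = np.sign(rng.standard_normal((32, 24)))    # 32 patterns of 24 bits,
                                                 # matching the prototype chip
probe = mems[0].copy()
probe[:4] *= -1                                  # corrupt 4 of the 24 bits
print(np.array_equal(ecam_recall(mems, probe), mems[0]))  # usually True
```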


An Information Theoretic Approach to Rule-Based Connectionist Expert Systems

Neural Information Processing Systems

We discuss in this paper architectures for executing probabilistic rule-bases in a parallel manner, using as a theoretical basis recently introduced information-theoretic models. We will begin by describing our (non-neural) learning algorithm and theory of quantitative rule modelling, followed by a discussion on the exact nature of two particular models. Finally we work through an example of our approach, going from database to rules to inference network, and compare the network's performance with the theoretical limits for specific problems.
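
The rule-learning work of Goodman and Smyth ranks candidate probabilistic rules by an information-theoretic goodness measure; the sketch below assumes the J-measure commonly associated with that line of work, since the paper's exact formula is not restated here. Treat it as an illustration of ranking rules extracted from a database, not as the paper's definition.

```python
import math

# Sketch of information-theoretic rule ranking in the spirit of the
# (non-neural) learning algorithm mentioned above.  We assume the
# J-measure, which scores a rule "if Y=y then X=x" by how much
# information the condition carries about the conclusion:
#   J(X; Y=y) = p(y) * [ p(x|y) log2(p(x|y)/p(x))
#                        + (1-p(x|y)) log2((1-p(x|y))/(1-p(x))) ]

def j_measure(p_y, p_x, p_x_given_y):
    """Rule goodness for 'if Y=y then X=x', in bits."""
    def term(p, q):  # relative-entropy term, with 0 log 0 = 0
        return p * math.log2(p / q) if p > 0 else 0.0
    return p_y * (term(p_x_given_y, p_x) +
                  term(1 - p_x_given_y, 1 - p_x))

# Hypothetical candidate rules as (description, p(y), p(x), p(x|y)).
rules = [
    ("if humid then rain",    0.30, 0.20, 0.60),
    ("if weekday then rain",  0.71, 0.20, 0.21),
    ("if overcast then rain", 0.25, 0.20, 0.90),
]
for desc, p_y, p_x, p_xy in sorted(
        rules, key=lambda r: j_measure(*r[1:]), reverse=True):
    print(f"{desc}: J = {j_measure(p_y, p_x, p_xy):.3f} bits")
```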

