Information Technology
Task Communication Through Natural Language and Graphics
Badler, Norman, Webber, Bonnie
With increases in the complexity of information that must be communicated either by or to computer comes a corresponding need to find ways to communicate that information simply and effectively. It makes little sense to force the entire burden of communication onto a single medium, whether spoken or written text, gestures, diagrams, or graphical animation, when in many situations information is communicated effectively only through combinations of media.
Using Local Models to Control Movement
This paper explores the use of a model neural network for motor learning. In the early 1960s, Steinbuch and Taylor presented neural network designs that perform nearest-neighbor lookup. In this paper, their nearest-neighbor network is augmented with a local model network, which fits a local model to a set of nearest neighbors. The network design is equivalent to local regression. This network architecture can represent smooth nonlinear functions, yet has simple training rules with a single global optimum.
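As a rough illustration of the idea (not the paper's implementation), the sketch below fits a linear model to the k nearest neighbors of a query point; the function name `local_model_predict` and the choice k=10 are assumptions for the example.

```python
import numpy as np

def local_model_predict(X, y, query, k=10):
    """Predict y at `query` by fitting a linear model to the k nearest
    neighbors of `query` in X (a minimal local-regression sketch)."""
    dists = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(dists)[:k]                      # nearest-neighbor lookup
    Xk = np.hstack([X[idx], np.ones((k, 1))])        # local design matrix with bias term
    w, *_ = np.linalg.lstsq(Xk, y[idx], rcond=None)  # fit the local linear model
    return np.append(query, 1.0) @ w                 # evaluate the model at the query

# Example: learn y = sin(x) from noisy samples
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, (200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
print(local_model_predict(X, y, np.array([1.0])))    # roughly sin(1.0) ~ 0.84
```

Because each prediction fits its own small linear model, the predictor as a whole can represent smooth nonlinear functions while each individual fit remains a least-squares problem with a single global optimum, as the abstract notes.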
Adjoint Operator Algorithms for Faster Learning in Dynamical Neural Networks
Barhen, Jacob, Toomarian, Nikzad Benny, Gulati, Sandeep
A methodology for faster supervised learning in dynamical nonlinear neural networks is presented. It exploits the concept of adjoint operators to enable computation of changes in the network's response due to perturbations in all system parameters, using the solution of a single set of appropriately constructed linear equations. The lower bound on speedup per learning iteration over conventional methods for calculating the neuromorphic energy gradient is O(N²), where N is the number of neurons in the network.

1 INTRODUCTION

The biggest promise of artificial neural networks as computational tools lies in the hope that they will enable fast processing and synthesis of complex information patterns. In particular, considerable effort has recently been devoted to the formulation of efficient methodologies for learning (e.g., Rumelhart et al., 1986; Pineda, 1988; Pearlmutter, 1989; Williams and Zipser, 1989; Barhen, Gulati and Zak, 1989). The development of learning algorithms is generally based upon the minimization of a neuromorphic energy function.
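The following minimal numpy sketch illustrates the general adjoint idea on a small relaxation network with fixed point x* = tanh(W x* + b): the sensitivity of an energy E(x*) to all N² weights follows from one linear solve for an adjoint vector, rather than from N² forward perturbations. The network, energy, and variable names are illustrative assumptions, not the paper's exact equations.

```python
import numpy as np

def fixed_point(W, b, iters=200):
    """Relax the dynamics x <- tanh(Wx + b) to an equilibrium x*."""
    x = np.zeros(len(b))
    for _ in range(iters):
        x = np.tanh(W @ x + b)
    return x

N = 5
rng = np.random.default_rng(1)
W = 0.3 * rng.standard_normal((N, N))      # small weights so the relaxation converges
b = rng.standard_normal(N)
target = rng.standard_normal(N)

x = fixed_point(W, b)
J = (1 - x**2)[:, None] * W                # Jacobian of tanh(Wx + b) at x*
dE_dx = x - target                         # gradient of E = 0.5 * ||x - target||^2

# Adjoint equation: (I - J)^T lam = dE/dx*  -- a single N-dimensional linear system
lam = np.linalg.solve((np.eye(N) - J).T, dE_dx)

# Gradient of E with respect to all N^2 weights from the one adjoint solution
dE_dW = np.outer((1 - x**2) * lam, x)
```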
Predicting Weather Using a Genetic Memory: A Combination of Kanerva's Sparse Distributed Memory with Holland's Genetic Algorithms
Kanerva's sparse distributed memory (SDM) is an associative-memory model based on the mathematical properties of high-dimensional binary address spaces. Holland's genetic algorithms are a search technique for high-dimensional spaces inspired by the evolutionary processes of DNA. "Genetic Memory" is a hybrid of these two systems, in which the memory uses a genetic algorithm to dynamically reconfigure its physical storage locations to reflect correlations between the stored addresses and data. For example, when presented with raw weather station data, the Genetic Memory discovers specific features in the weather data that correlate well with upcoming rain, and reconfigures the memory to use this information effectively. This architecture is designed to maximize the system's ability to scale up to real-world problems.
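A minimal sketch of the hybrid follows, under assumed parameters (64-bit addresses, 200 hard locations, Hamming activation radius 24): an SDM whose hard-location addresses are periodically rewritten by a genetic algorithm. The fitness measure and GA details here are placeholders; the paper derives fitness from correlations between stored addresses and data.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LOCS, RADIUS = 64, 200, 24               # address width, hard locations, radius

addresses = rng.integers(0, 2, (LOCS, DIM))   # SDM hard-location addresses
counters = np.zeros((LOCS, DIM))              # SDM data counters

def activated(addr):
    """Hard locations within Hamming distance RADIUS of `addr`."""
    return np.count_nonzero(addresses != addr, axis=1) <= RADIUS

def write(addr, data):
    counters[activated(addr)] += np.where(data == 1, 1, -1)  # bipolar counter update

def read(addr):
    return (counters[activated(addr)].sum(axis=0) > 0).astype(int)

def genetic_step(fitness):
    """Replace the least-fit location's address by crossing over two fit ones."""
    order = np.argsort(fitness)
    p1, p2 = addresses[order[-1]], addresses[order[-2]]
    cut = rng.integers(1, DIM)
    child = np.concatenate([p1[:cut], p2[cut:]])            # one-point crossover
    child ^= (rng.random(DIM) < 0.01).astype(child.dtype)   # occasional mutation
    addresses[order[0]] = child
    counters[order[0]] = 0                                  # fresh counters for new address
```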
Using a Translation-Invariant Neural Network to Diagnose Heart Arrhythmia
Distinctive electrocardiogram (ECG) patterns are created when the heart is beating normally and when a dangerous arrhythmia is present. Some devices that monitor the ECG and react to arrhythmias parameterize the ECG signal and make a diagnosis based on the parameters. The author discusses the use of a neural network to classify the ECG signals directly.
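Translation invariance is commonly obtained by convolving learned kernels across the signal and pooling the responses, so the diagnosis does not depend on where in the window a beat falls. The sketch below is a generic illustration of that principle, not the author's network; the function names and the thresholded linear readout are assumptions.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1D cross-correlation of a signal with a learned kernel."""
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

def classify(ecg, kernels, weights, bias):
    """Translation-invariant score: convolve each kernel over the ECG window,
    keep only the maximum response, so the decision is unchanged if the
    beat pattern shifts in time."""
    features = np.array([conv1d(ecg, k).max() for k in kernels])  # global max pooling
    return features @ weights + bias > 0   # True => arrhythmia (illustrative readout)
```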
A self-organizing multiple-view representation of 3D objects
Weinshall, Daphna, Edelman, Shimon, Bülthoff, Heinrich H.
We demonstrate the ability of a two-layer network of thresholded summation units to support representations of 3D objects in which several distinct 2D views are stored for each object. Using unsupervised Hebbian relaxation, the network learned to recognize ten objects from different viewpoints. The training process led to the emergence of compact representations of the specific input views. When tested on novel views of the same objects, the network exhibited a substantial generalization capability. In simulated psychophysical experiments, the network's behavior was qualitatively similar to that of human subjects.
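A minimal sketch of unsupervised Hebbian learning in a layer of thresholded summation units is given below; the layer sizes, threshold, learning rate, and the weight normalization used to keep weights bounded are illustrative assumptions rather than the paper's exact training rule.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_OUT, THETA, ETA = 100, 20, 0.5, 0.1

W = 0.01 * rng.random((N_OUT, N_IN))      # input -> representation weights

def hebbian_step(view):
    """One unsupervised update for a single 2D view (binary input vector)."""
    y = (W @ view > THETA).astype(float)  # thresholded summation units
    W[:] += ETA * np.outer(y, view)       # Hebbian rule: strengthen co-active pairs
    # Row normalization keeps weights bounded (a common stabilization,
    # not necessarily the paper's choice)
    W[:] /= np.maximum(np.linalg.norm(W, axis=1, keepdims=True), 1e-8)
    return y
```

Repeated presentation of the stored views lets units come to respond to recurring view features, yielding the compact view-specific representations the abstract describes.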
Operational Fault Tolerance of CMAC Networks
Carter, Michael J., Rudolph, Franklin J., Nucci, Adam J.
The performance sensitivity of Albus' CMAC network was studied for the scenario in which faults are introduced into the adjustable weights after training has been accomplished. It was found that fault sensitivity was reduced with increased generalization when "loss of weight" faults were considered, but sensitivity was increased for "saturated weight" faults.

1 INTRODUCTION

Fault tolerance is often cited as an inherent property of neural networks, and is thought by many to be a natural consequence of "massively parallel" computational architectures. Numerous anecdotal reports of fault-tolerance experiments, primarily in pattern classification tasks, abound in the literature. However, there has been surprisingly little rigorous investigation of the fault-tolerance properties of various network architectures in other application areas. In this paper we investigate the fault tolerance of the CMAC (Cerebellar Model Arithmetic Computer) network [Albus 1975] in a systematic manner. CMAC networks have attracted much recent attention because of their successful application in robotic manipulator control [Ersu 1984, Miller 1986, Lane 1988]. Since fault tolerance is a key concern in critical control tasks, there is added impetus to study this aspect of CMAC performance.
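For concreteness, here is a toy CMAC with the two post-training fault types the paper studies; the input quantization, table size, saturation value, and fault rate are assumptions for illustration. The generalization parameter C controls how many overlapping tilings (and hence weights) each input activates.

```python
import numpy as np

C, TABLE = 8, 512                        # generalization parameter, weights per tiling
rng = np.random.default_rng(0)
weights = np.zeros((C, TABLE))

def active_cells(x):
    """Indices of the C weights a scalar input activates (one per offset tiling)."""
    q = int(x * 100)                     # quantize the input (assumed resolution)
    return [((q + c) // C) % TABLE for c in range(C)]

def output(x):
    return sum(weights[c, i] for c, i in enumerate(active_cells(x)))

def train(x, target, lr=0.5):
    err = target - output(x)
    for c, i in enumerate(active_cells(x)):
        weights[c, i] += lr * err / C    # spread the correction over the C active cells

def inject_fault(kind, frac=0.01):
    """Post-training faults of the two kinds studied in the paper."""
    mask = rng.random(weights.shape) < frac
    if kind == "loss":                   # "loss of weight": the weight reads as zero
        weights[mask] = 0.0
    else:                                # "saturated weight": pinned at a rail value
        weights[mask] = 10.0
```

With "loss" faults, larger C averages each output over more weights, so a zeroed weight perturbs the sum less; a saturated weight, by contrast, injects a large fixed error into every output that touches it, consistent with the sensitivities reported above.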
Maximum Likelihood Competitive Learning
One popular class of unsupervised algorithms is competitive algorithms. In the traditional view of competition, only one competitor, the winner, adapts for any given case. I propose to view competitive adaptation as attempting to fit a blend of simple probability generators (such as Gaussians) to a set of data points. The maximum likelihood fit of a model of this type suggests a "softer" form of competition, in which all competitors adapt in proportion to the relative probability that the input came from each competitor. I investigate one application of the soft competitive model, placement of radial basis function centers for function interpolation, and show that the soft model can give better performance with little additional computational cost.

1 INTRODUCTION

Interest in unsupervised learning has increased recently due to the application of more sophisticated mathematical tools (Linsker, 1988; Plumbley and Fallside, 1988; Sanger, 1989) and the success of several elegant simulations of large-scale self-organization (Linsker, 1986; Kohonen, 1982). One popular class of unsupervised algorithms is competitive algorithms, which have appeared as components in a variety of systems (von der Malsburg, 1973; Fukushima, 1975; Grossberg, 1978). Generalizing the definition of Rumelhart and Zipser (1986), a competitive adaptive system consists of a collection of modules that are structurally identical except, possibly, for random initial parameter variation.
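The soft competitive update can be sketched as follows: every competitor moves toward the input in proportion to the posterior probability that its Gaussian generated the input, with hard, winner-take-all competition recovered as the limiting case of a sharp posterior. The learning rate and the fixed, shared variance are assumptions for the example.

```python
import numpy as np

def soft_competitive_step(centers, x, lr=0.05, sigma=1.0):
    """All competitors adapt in proportion to the relative probability that
    the data point x came from their Gaussian (soft competition), rather
    than only the single winner adapting (hard competition)."""
    d2 = ((centers - x) ** 2).sum(axis=1)
    p = np.exp(-d2 / (2 * sigma ** 2))        # unnormalized Gaussian likelihoods
    p /= p.sum()                              # posterior responsibility per competitor
    centers += lr * p[:, None] * (x - centers)
    return centers

# Example: three competitors adapting to one 2D data point
centers = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])
print(soft_competitive_step(centers, np.array([1.2, 0.8])))
```

As sigma shrinks, the responsibilities p concentrate on the nearest center and the rule reduces to traditional winner-take-all adaptation.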