Capacity and Information Efficiency of a Brain-like Associative Net

Neural Information Processing Systems

Bruce Graham and David Willshaw
Centre for Cognitive Science, University of Edinburgh, 2 Buccleuch Place, Edinburgh, EH8 9LW, UK
Email: bruce@cns.ed.ac.uk & david@cns.ed.ac.uk

Abstract
We have determined the capacity and information efficiency of an associative net configured in a brain-like way with partial connectivity and noisy input cues. Recall theory was used to calculate the capacity when pattern recall is achieved using a winners-take-all strategy. Transforming the dendritic sum according to input activity and unit usage can greatly increase the capacity of the associative net under these conditions. The level of connectivity at which information efficiency is greatest corresponds to that commonly seen in the brain and invites speculation that the brain is connected in the most information efficient way.

1 INTRODUCTION
Standard network associative memories become more plausible as models of associative memory in the brain if they incorporate (1) partial connectivity, (2) sparse activity and (3) recall from noisy cues. In this paper we consider the capacity of a binary associative net (Willshaw, Buneman, & Longuet-Higgins, 1969; Willshaw, 1971; Buckingham, 1991) containing these features. While the associative net is a very simple model of associative memory, its behaviour as a storage device is not trivial, yet it remains tractable to theoretical analysis.
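As a point of reference for the recall scheme described in the abstract above, the following is a minimal Python sketch of a fully connected, noise-free binary associative net with winners-take-all recall. All names are illustrative; the paper's partial connectivity, noisy cues and transformed dendritic sums are not modelled.

```python
import numpy as np

def store(input_patterns, output_patterns):
    """Clipped Hebbian storage: a synapse is set whenever its input and
    output units are simultaneously active in any stored pattern pair."""
    n_in = len(input_patterns[0])
    n_out = len(output_patterns[0])
    W = np.zeros((n_out, n_in), dtype=bool)
    for a, b in zip(input_patterns, output_patterns):
        W |= np.outer(np.asarray(b, bool), np.asarray(a, bool)).astype(bool)
    return W

def recall_wta(W, cue, k_out):
    """Winners-take-all recall: the k_out units with the largest dendritic
    sums are declared active."""
    sums = W.astype(int) @ np.asarray(cue, int)
    output = np.zeros(W.shape[0], dtype=int)
    output[np.argsort(sums)[-k_out:]] = 1
    return output
```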


A Charge-Based CMOS Parallel Analog Vector Quantizer

Neural Information Processing Systems

We present an analog VLSI chip for parallel analog vector quantization. The MOSIS 2.0 µm double-poly CMOS Tiny chip contains an array of 16 x 16 charge-based distance estimation cells, implementing a mean absolute difference (MAD) metric operating on a 16-input analog vector field and 16 analog template vectors.
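For readers unfamiliar with the metric, this is a small software sketch of the MAD computation and nearest-template selection that the charge-based cell array performs in parallel in analog hardware (array sizes follow the abstract; the function name is illustrative):

```python
import numpy as np

def mad_quantize(x, templates):
    """Pick the template with the smallest mean absolute difference (MAD)
    to the input vector x.

    x         : shape (16,)     -- the input vector
    templates : shape (16, 16)  -- the 16 template vectors
    """
    distances = np.mean(np.abs(templates - x), axis=1)  # MAD to each template
    return int(np.argmin(distances)), distances
```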


Predicting the Risk of Complications in Coronary Artery Bypass Operations using Neural Networks

Neural Information Processing Systems

MLP networks provided slightly better risk prediction than conventional logistic regression when used to predict the risk of death, stroke, and renal failure on 1257 patients who underwent coronary artery bypass operations. Bootstrap sampling was required to compare approaches, and regularization provided by early stopping was an important component of improved performance. A simplified approach to generating confidence intervals for MLP risk predictions using an auxiliary "confidence MLP" was also developed. The confidence MLP is trained to reproduce the confidence bounds that were generated during training by 50 MLP networks trained using bootstrap samples. Current research is validating these results using larger data sets, exploring approaches to detect outlier patients who are so different from any training patient that accurate risk prediction is suspect, developing approaches to explaining which input features are important for an individual patient, and determining why MLP networks provide improved performance.
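A hedged sketch of the bootstrap-plus-early-stopping procedure described above, with scikit-learn's MLPClassifier standing in for the paper's networks. The hidden layer size, feature set and the auxiliary confidence MLP are not reproduced, and the percentile bounds are only an assumed way of forming confidence intervals from the 50 bootstrap models.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def bootstrap_risk_predictions(X, y, X_new, n_models=50, seed=0):
    """Train n_models MLPs on bootstrap resamples (early stopping acts as a
    regularizer) and return mean risk and percentile confidence bounds."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))      # bootstrap sample
        model = MLPClassifier(hidden_layer_sizes=(10,),
                              early_stopping=True, max_iter=500)
        model.fit(X[idx], y[idx])
        preds.append(model.predict_proba(X_new)[:, 1])  # predicted risk
    preds = np.array(preds)                             # (n_models, n_patients)
    lower, upper = np.percentile(preds, [2.5, 97.5], axis=0)
    return preds.mean(axis=0), lower, upper
```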



PCA-Pyramids for Image Compression

Neural Information Processing Systems

First, we show that we can use neural networks in a pyramidal framework, yielding the so-called PCA pyramids. Then we present an image compression method based on the PCA pyramid, which is similar to the Laplace pyramid and the wavelet transform. Some experimental results with real images are reported. Finally, we present a method to combine the quantization step with the learning of the PCA pyramid.

1 Introduction
In the past few years, a lot of work has been done on using neural networks for image compression.
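The PCA pyramid itself is multi-level; as background, here is a single-level sketch of PCA-based patch compression and reconstruction (not the paper's pyramid; patch size and number of components are arbitrary):

```python
import numpy as np

def pca_compress_patches(image, patch=8, n_components=8):
    """Project non-overlapping image patches onto their top principal
    components, then reconstruct the image from the coefficients."""
    h, w = image.shape
    h, w = h - h % patch, w - w % patch
    blocks = (image[:h, :w]
              .reshape(h // patch, patch, w // patch, patch)
              .transpose(0, 2, 1, 3)
              .reshape(-1, patch * patch))
    mean = blocks.mean(axis=0)
    centered = blocks - mean
    # Principal directions from the SVD of the centered patch matrix.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:n_components]                       # top components
    codes = centered @ basis.T                      # compressed representation
    recon = (codes @ basis + mean)
    recon = (recon.reshape(h // patch, w // patch, patch, patch)
                  .transpose(0, 2, 1, 3)
                  .reshape(h, w))
    return codes, recon
```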


New Algorithms for 2D and 3D Point Matching: Pose Estimation and Correspondence

Neural Information Processing Systems

A fundamental open problem in computer vision, determining pose and correspondence between two sets of points in space, is solved with a novel, robust and easily implementable algorithm. The technique works on noisy point sets that may be of unequal sizes and may differ by nonrigid transformations. A 2D variation calculates the pose between point sets related by an affine transformation (translation, rotation, scale and shear). A 3D to 3D variation calculates translation and rotation. An objective function describing the problem is derived from mean field theory. The objective is minimized with clocked (EM-like) dynamics. Experiments with both handwritten and synthetic data provide empirical evidence for the method.
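The sketch below is not the paper's mean-field, clocked-dynamics algorithm; it is a much simpler alternating scheme (nearest-neighbour correspondence followed by a least-squares affine fit) included only to illustrate how pose and correspondence estimation are coupled. It assumes 2D points and can get trapped in local minima that the paper's relaxation is designed to avoid.

```python
import numpy as np

def fit_affine(X, Y):
    """Least-squares affine map (A, t) minimizing ||A x_i + t - y_i||^2."""
    Xh = np.hstack([X, np.ones((len(X), 1))])        # homogeneous coordinates
    params, *_ = np.linalg.lstsq(Xh, Y, rcond=None)  # params is (3, 2)
    return params[:2].T, params[2]                   # A (2x2), t (2,)

def match_points(X, Y, n_iter=20):
    """Alternate nearest-neighbour correspondence and affine re-fitting."""
    A, t = np.eye(2), np.zeros(2)
    for _ in range(n_iter):
        XT = X @ A.T + t                             # transform current guess
        diffs = np.linalg.norm(XT[:, None] - Y[None], axis=2)
        corr = np.argmin(diffs, axis=1)              # closest Y point for each X
        A, t = fit_affine(X, Y[corr])
    return A, t, corr
```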


Unsupervised Classification of 3D Objects from 2D Views

Neural Information Processing Systems

Satoshi Suzuki and Hiroshi Ando
ATR Human Information Processing Research Laboratories, 2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, Japan
satoshi@hip.atr.co.jp, ando@hip.atr.co.jp

Abstract
This paper presents an unsupervised learning scheme for categorizing 3D objects from their 2D projected images. The scheme exploits an auto-associative network's ability to encode each view of a single object into a representation that indicates its view direction. We propose two models that employ different classification mechanisms; the first model selects an auto-associative network whose recovered view best matches the input view, and the second model is based on a modular architecture whose additional network classifies the views by splitting the input space nonlinearly. We demonstrate the effectiveness of the proposed classification models through simulations using 3D wire-frame objects.

1 INTRODUCTION
The human visual system can recognize various 3D (three-dimensional) objects from their 2D (two-dimensional) retinal images although the images vary significantly as the viewpoint changes. Recent computational models have explored how to learn to recognize 3D objects from their projected views (Poggio & Edelman, 1990).
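A rough sketch of the first model's selection rule, reconstruction error against a bank of auto-associative networks, using scikit-learn's MLPRegressor as the auto-associator. It assumes the views have already been grouped by object for training, which sidesteps the unsupervised assignment that is the paper's actual contribution.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_object_models(views_per_object, hidden=5):
    """Train one auto-associative network (input -> input) per object."""
    models = []
    for views in views_per_object:                   # views: (n_views, n_dims)
        net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=2000)
        net.fit(views, views)                        # auto-associative target
        models.append(net)
    return models

def classify_view(models, view):
    """Pick the network whose recovered view best matches the input view."""
    errors = [np.mean((m.predict(view[None]) - view) ** 2) for m in models]
    return int(np.argmin(errors)), errors
```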


A Mixture Model System for Medical and Machine Diagnosis

Neural Information Processing Systems

Diagnosis of human disease or machine fault is a missing data problem, since many variables are initially unknown and additional information needs to be obtained. The joint probability distribution of the data can be used to solve this problem. We model this distribution with mixture models whose parameters are estimated by the EM algorithm. This gives the benefit that missing data in the database itself can also be handled correctly. The request for new information to refine the diagnosis is made using the maximum utility principle. Since the system is based on learning, it is domain independent and less labor intensive than expert systems or probabilistic networks. An example using a heart disease database is presented.
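A hedged sketch of the core idea: model the joint distribution with a Gaussian mixture and query it for unobserved variables. scikit-learn's GaussianMixture stands in for the paper's EM implementation and, unlike it, requires complete training rows; the diagnostic query simply takes expectations over components weighted by responsibilities computed on the observed dimensions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_joint_model(X_train, n_components=5, seed=0):
    """Model the joint distribution of all variables with a Gaussian mixture."""
    gmm = GaussianMixture(n_components=n_components, covariance_type='diag',
                          random_state=seed)
    return gmm.fit(X_train)

def predict_missing(gmm, x, observed):
    """Expected values of all variables given the observed ones, using
    component responsibilities computed on the observed dimensions only."""
    obs = np.asarray(observed)
    means, var = gmm.means_, gmm.covariances_        # (K, D) each for 'diag'
    # log p(x_obs | component k) under diagonal Gaussians
    log_lik = -0.5 * np.sum((x[obs] - means[:, obs]) ** 2 / var[:, obs]
                            + np.log(2 * np.pi * var[:, obs]), axis=1)
    log_post = np.log(gmm.weights_) + log_lik
    resp = np.exp(log_post - log_post.max())
    resp /= resp.sum()                               # component responsibilities
    return resp @ means                              # expectation over components
```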


Pairwise Neural Network Classifiers with Probabilistic Outputs

Neural Information Processing Systems

Multi-class classification problems can be efficiently solved by partitioning the original problem into sub-problems involving only two classes: for each pair of classes, a (potentially small) neural network is trained using only the data of these two classes. We show how to combine the outputs of the two-class neural networks in order to obtain posterior probabilities for the class decisions. The resulting probabilistic pairwise classifier is part of a handwriting recognition system which is currently applied to check reading. We present results on real-world databases and show that, from a practical point of view, these results compare favorably to other neural network approaches.
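One rule that has been used to turn pairwise two-class probabilities into multi-class posteriors is sketched below; it is offered as an illustration of the combination step, not necessarily the exact derivation given in the paper.

```python
import numpy as np

def combine_pairwise(p):
    """Combine pairwise probabilities into class posteriors.

    p[i, j] is the probability assigned to class i by the network trained on
    classes {i, j}; p[i, j] + p[j, i] should equal 1.  Uses the rule
        P_i  proportional to  1 / ( sum_{j != i} 1/p[i, j] - (K - 2) ),
    then renormalizes so the posteriors sum to one.
    """
    K = p.shape[0]
    posts = np.empty(K)
    for i in range(K):
        others = [j for j in range(K) if j != i]
        posts[i] = 1.0 / (np.sum(1.0 / p[i, others]) - (K - 2))
    return posts / posts.sum()
```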


SARDNET: A Self-Organizing Feature Map for Sequences

Neural Information Processing Systems

A self-organizing neural network for sequence classification called SARDNET is described and analyzed experimentally. SARDNET extends the Kohonen Feature Map architecture with activation retention and decay in order to create unique distributed response patterns for different sequences. SARDNET yields extremely dense yet descriptive representations of sequential input in very few training iterations. The network has proven successful at mapping arbitrary sequences of binary and real numbers, as well as phonemic representations of English words. Potential applications include isolated spoken word recognition and cognitive science models of sequence processing.

1 INTRODUCTION
While neural networks have proved a good tool for processing static patterns, classifying sequential information has remained a challenging task. The problem involves recognizing patterns in a time series of vectors, which requires forming a good internal representation for the sequences. Several researchers have proposed extending the self-organizing feature map (Kohonen 1989, 1990), a highly successful static pattern classification method, to sequential information (Kangas 1991; Samarabandu and Jakubowicz 1990; Scholtes 1991). Below, three of the most recent of these networks are briefly described. The remainder of the paper focuses on a new architecture designed to overcome the shortcomings of these approaches.
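A minimal sketch of the activation retention and decay mechanism on an already-trained map (training of the map weights is omitted; the decay factor is illustrative):

```python
import numpy as np

def sardnet_response(weights, sequence, decay=0.9):
    """Form a SARDNET-style response pattern for one input sequence.

    weights  : (n_units, n_dims) map weight vectors, assumed already trained
    sequence : iterable of input vectors, each of shape (n_dims,)

    For each sequence element the closest still-available unit wins, is set
    to full activation and removed from further competition; earlier winners
    decay, so the final pattern encodes both which units won and in what order.
    """
    n_units = weights.shape[0]
    activation = np.zeros(n_units)
    available = np.ones(n_units, dtype=bool)
    for x in sequence:
        dists = np.linalg.norm(weights - x, axis=1)
        dists[~available] = np.inf                   # winners cannot win twice
        winner = int(np.argmin(dists))
        activation *= decay                          # decay previous winners
        activation[winner] = 1.0
        available[winner] = False
    return activation
```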