

SEXNET: A Neural Network Identifies Sex From Human Faces

Neural Information Processing Systems

People can capably tell if a human face is male or female. Recognizing the sex of conspecifics is important. While some animals use pheromones to recognize sex, in humans this task is primarily visual. How is sex recognized from faces? By and large we are unable to say. Although certain features are nearly pathognomonic for one sex or the other (facial hair for men, makeup or certain hairstyles for women), even in the absence of these cues the determination is made; and even in their presence, other cues may override. Sex recognition in faces is thus a prototypical pattern recognition task of the sort at which humans excel, but which has vexed traditional AI. It appears to follow no simple algorithm, and indeed is modifiable according to fashion (makeup, hair etc).


EMPATH: Face, Emotion, and Gender Recognition Using Holons

Neural Information Processing Systems

[Abstract damaged in extraction; the surviving fragments mention previous face recognition systems, inputs scaled to a fixed range, and a figure showing part of the training set and the distances between facial elements.]


Analog Computation at a Critical Point: A Novel Function for Neuronal Oscillations?

Neural Information Processing Systems

Static correlations among spike trains obtained from simulations of large arrays of cells are in agreement with the predictions from these Hamiltonians, and dynamic correlations


Learning Trajectory and Force Control of an Artificial Muscle Arm by Parallel-hierarchical Neural Network Model

Neural Information Processing Systems

We propose a new parallel-hierarchical neural network model to enable motor learning for simultaneous control of both trajectory and force.


Basis-Function Trees as a Generalization of Local Variable Selection Methods for Function Approximation

Neural Information Processing Systems

Function approximation on high-dimensional spaces is often thwarted by a lack of sufficient data to adequately "fill" the space, or lack of sufficient computational resources. The technique of local variable selection provides a partial solution to these problems by attempting to approximate functions locally using fewer than the complete set of input dimensions.


Constructing Hidden Units using Examples and Queries

Neural Information Processing Systems

While the network loading problem for 2-layer threshold nets is NP-hard when learning from examples alone (as with backpropagation), (Baum, 91) has now proved that a learner can employ queries to evade the hidden unit credit assignment problem and PAC-load nets with up to four hidden units in polynomial time. Empirical tests show that the method can also learn far more complicated functions such as randomly generated networks with 200 hidden units. The algorithm easily approximates Wieland's 2-spirals function using a single layer of 50 hidden units, and requires only 30 minutes of CPU time to learn 200-bit parity to 99.7% accuracy.


A Theory for Neural Networks with Time Delays

Neural Information Processing Systems

We present a new neural network model for processing of temporal patterns. This model, the gamma neural model, is as general as a convolution delay model with arbitrary weight kernels w(t). We show that the gamma model can be formulated as a (partially prewired) additive model. A temporal Hebbian learning rule is derived and we establish links to related existing models for temporal processing.

1 INTRODUCTION

In this paper, we are concerned with developing neural nets with short-term memory for processing of temporal patterns. In the literature, basically two ways have been reported to incorporate short-term memory in the neural system equations.
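The short-term memory structure underlying the gamma model can be sketched as a cascade of identical leaky integrators (a gamma memory). The sketch below assumes the standard discrete-time recursion x_k[t] = (1 - mu) x_k[t-1] + mu x_{k-1}[t-1]; the function name and parameter values are illustrative, not the paper's notation:

```python
import numpy as np

def gamma_memory(signal, order=3, mu=0.5):
    """Discrete-time gamma memory: a cascade of identical leaky
    integrators, x_k[t] = (1 - mu)*x_k[t-1] + mu*x_{k-1}[t-1].
    Tap k holds a progressively delayed, smoothed copy of the input;
    mu trades memory depth against temporal resolution."""
    taps = np.zeros(order + 1)          # tap 0 is the raw input
    history = []
    for u in signal:
        # update the deepest taps first so each update uses the
        # previous time step's value of the tap above it
        for k in range(order, 0, -1):
            taps[k] = (1.0 - mu) * taps[k] + mu * taps[k - 1]
        taps[0] = u
        history.append(taps.copy())
    return np.array(history)            # shape (len(signal), order + 1)

# impulse response: tap k traces out a gamma-shaped kernel in time
resp = gamma_memory([1.0] + [0.0] * 19, order=2, mu=0.5)
```

Feeding an impulse through the cascade shows why the model is "partially prewired": the kernel shapes are fixed gamma functions of the single parameter mu, and only their mixing weights need be learned.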


Spherical Units as Dynamic Consequential Regions: Implications for Attention, Competition and Categorization

Neural Information Processing Systems

Spherical units can be used to construct dynamic reconfigurable consequential regions, the geometric bases for Shepard's (1987) theory of stimulus generalization in animals and humans. We derive from Shepard's (1987) generalization theory a particular multi-layer network with dynamic (centers and radii) spherical regions which possesses a specific mass function (Cauchy). This learning model generalizes the configural-cue network model (Gluck & Bower, 1988): (1) configural cues can be learned and do not require pre-wiring the power set of cues, (2) consequential regions are continuous rather than discrete, and (3) competition amongst receptive fields is shown to be increased by the global extent of a particular mass function (Cauchy). We compare other common mass functions (Gaussian, used in the models of Moody & Darken, 1989, and Kruschke, 1990) and standard backpropagation networks with hyperplane/logistic hidden units, showing that neither fares as well as a model of human generalization and learning.
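The claim that the Cauchy mass function's global extent increases competition can be illustrated numerically. This is a minimal sketch, assuming the usual radial forms 1/(1 + d²) for Cauchy and exp(−d²) for Gaussian with d the radius-scaled distance to the center; the function names are ours, not the paper's:

```python
import numpy as np

def cauchy_unit(x, center, radius):
    """Spherical unit with a Cauchy mass function: heavy tails give
    the unit global extent, so even distant units still compete."""
    d2 = np.sum((x - center) ** 2) / radius ** 2
    return 1.0 / (1.0 + d2)

def gaussian_unit(x, center, radius):
    """Conventional Gaussian radial unit: activation decays
    exponentially with squared distance, so its influence is local."""
    d2 = np.sum((x - center) ** 2) / radius ** 2
    return np.exp(-d2)

# five radii from the center, the Cauchy unit retains substantial
# activation while the Gaussian unit is effectively silent
x = np.array([5.0, 0.0])
c = np.array([0.0, 0.0])
cauchy_far = cauchy_unit(x, c, 1.0)
gauss_far = gaussian_unit(x, c, 1.0)
```

At this distance the Cauchy activation is about 0.04 while the Gaussian activation is on the order of 1e-11, so in a normalized competition the Cauchy unit still bids for distant stimuli.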


Qualitative structure from motion

Neural Information Processing Systems

I have presented a qualitative approach to the problem of recovering object structure from motion information and discussed some of its computational, psychophysical and implementational aspects. The computation of qualitative shape, as represented by the sign of the Gaussian curvature, can be performed by a field of simple operators, in parallel over the entire image. The performance of a qualitative shape detection module, implemented by an artificial neural network, appears to be similar to the performance of human subjects in an identical task.
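The qualitative shape computation can be sketched directly from a sampled depth map, using the fact that the sign of the Gaussian curvature K = (z_xx z_yy − z_xy²)/(1 + z_x² + z_y²)² is the sign of the numerator alone. This finite-difference version is an illustrative stand-in, not the paper's field of parallel neural operators:

```python
import numpy as np

def gaussian_curvature_sign(z):
    """Sign of Gaussian curvature for a sampled depth map z[y, x],
    via finite-difference derivatives. Only the numerator
    zxx*zyy - zxy**2 matters: the denominator is always positive."""
    zy, zx = np.gradient(z)       # first derivatives (rows = y, cols = x)
    zxy, zxx = np.gradient(zx)    # second derivatives of zx
    zyy, _ = np.gradient(zy)      # second derivative of zy along y
    return np.sign(zxx * zyy - zxy ** 2)

# elliptic (bowl) versus hyperbolic (saddle) test surfaces
xs = np.linspace(-1.0, 1.0, 21)
X, Y = np.meshgrid(xs, xs)
bowl = gaussian_curvature_sign(X ** 2 + Y ** 2)
saddle = gaussian_curvature_sign(X ** 2 - Y ** 2)
```

The sign is invariant to the uniform grid spacing, so the spacing argument to `np.gradient` can be omitted; interior points of the bowl come out elliptic (+1) and of the saddle hyperbolic (−1).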


Continuous Speech Recognition by Linked Predictive Neural Networks

Neural Information Processing Systems

We present a large vocabulary, continuous speech recognition system based on Linked Predictive Neural Networks (LPNN's). The system uses neural networks as predictors of speech frames, yielding distortion measures which are used by the One Stage DTW algorithm to perform continuous speech recognition. The system, already deployed in a Speech to Speech Translation system, currently achieves 95%, 58%, and 39% word accuracy on tasks with perplexity 5, 111, and 402 respectively, outperforming several simple HMMs that we tested. We also found that the accuracy and speed of the LPNN can be slightly improved by the judicious use of hidden control inputs. We conclude by discussing the strengths and weaknesses of the predictive approach.
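The predictive approach reduces recognition to a distortion measure: each candidate model predicts the next speech frame from preceding frames, and the squared prediction error becomes the local cost that DTW accumulates. In the sketch below a plain linear map stands in for each predictor network, and the dimensions, context length, and random data are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def prediction_distortion(predictor, prev_frames, frame):
    """Distortion of one observed frame under one predictor: squared
    error between predicted and observed frame. In an LPNN each
    speech unit owns a predictor network; a linear map stands in
    for the network here purely for illustration."""
    pred = predictor @ prev_frames.ravel()
    return float(np.sum((pred - frame) ** 2))

rng = np.random.default_rng(0)
dim, context = 8, 2
frames = rng.standard_normal((5, dim))      # a short synthetic utterance
predictors = [rng.standard_normal((dim, dim * context)) for _ in range(3)]

# local distortion matrix: rows = frames, columns = candidate predictors;
# a DTW pass would then find the lowest-cost alignment through it
D = np.array([[prediction_distortion(p, frames[t - context:t], frames[t])
               for p in predictors]
              for t in range(context, len(frames))])
```

The design point is that the predictors, not fixed templates, define the distortions, so the same One Stage DTW machinery applies while the acoustic modeling is learned.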