Connectionist Models for Auditory Scene Analysis
Although the visual and auditory systems share the same basic tasks of informing an organism about its environment, most connectionist work on hearing to date has been devoted to the very different problem of speech recognition. We believe that the most fundamental task of the auditory system is the analysis of acoustic signals into components corresponding to individual sound sources, which Bregman has called auditory scene analysis. Computational and connectionist work on auditory scene analysis is reviewed, and the outline of a general model that includes these approaches is described.
Convergence of Indirect Adaptive Asynchronous Value Iteration Algorithms
Gullapalli, Vijaykumar, Barto, Andrew G.
Reinforcement learning methods based on approximating dynamic programming (DP) are receiving increased attention due to their utility in forming reactive control policies for systems embedded in dynamic environments. Environments are usually modeled as controlled Markov processes, but when the environment model is not known a priori, adaptive methods are necessary. Adaptive control methods are often classified as being direct or indirect. Direct methods directly adapt the control policy from experience, whereas indirect methods adapt a model of the controlled process and compute control policies based on the latest model. Our focus in this paper is on indirect adaptive DP-based methods. We present a convergence result for indirect adaptive asynchronous value iteration algorithms for the case in which a lookup table is used to store the value function. Our result implies convergence of several existing reinforcement learning algorithms such as adaptive real-time dynamic programming (ARTDP) (Barto, Bradtke, & Singh, 1993) and prioritized sweeping (Moore & Atkeson, 1993). Although the emphasis of researchers studying DP-based reinforcement learning has been on direct adaptive methods such as Q-Learning (Watkins, 1989) and methods using TD algorithms (Sutton, 1988), it is not clear that these direct methods are preferable in practice to indirect methods such as those analyzed in this paper.
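As a concrete illustration of the indirect adaptive scheme described above, the following minimal Python sketch maintains transition counts and reward sums as a model estimate and performs asynchronous Bellman backups on a lookup-table value function. The class name, data structures, and constants are illustrative assumptions rather than code from the paper.

from collections import defaultdict

class IndirectAVI:
    """Minimal sketch: indirect adaptive asynchronous value iteration."""

    def __init__(self, n_states, n_actions, gamma=0.95):
        self.nS, self.nA, self.gamma = n_states, n_actions, gamma
        self.counts = defaultdict(lambda: defaultdict(int))  # (s, a) -> {s': visit count}
        self.reward_sum = defaultdict(float)                  # (s, a) -> summed reward
        self.V = [0.0] * n_states                             # lookup-table value function

    def observe(self, s, a, r, s_next):
        # The "indirect" part: update the estimated model from experience.
        self.counts[(s, a)][s_next] += 1
        self.reward_sum[(s, a)] += r

    def backup(self, s):
        # Asynchronous Bellman backup at one state using the current model
        # estimate, in the spirit of ARTDP-style methods that interleave
        # acting, model estimation, and value backups.
        q_values = []
        for a in range(self.nA):
            visits = sum(self.counts[(s, a)].values())
            if visits == 0:
                continue
            r_hat = self.reward_sum[(s, a)] / visits
            exp_v = sum(c / visits * self.V[s2] for s2, c in self.counts[(s, a)].items())
            q_values.append(r_hat + self.gamma * exp_v)
        if q_values:
            self.V[s] = max(q_values)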
Implementing Intelligence on Silicon Using Neuron-Like Functional MOS Transistors
Shibata, Tadashi, Kotani, Koji, Yamashita, Takeo, Ishii, Hiroshi, Kosaka, Hideo, Ohmi, Tadahiro
We present the implementation of intelligent electronic circuits realized for the first time using a new functional device called the Neuron MOS Transistor (neuMOS or vMOS for short), which simulates the behavior of biological neurons at the single-transistor level. A search for the most closely matching data in the memory cell array, for instance, can be carried out automatically in hardware without any software manipulation. What we have named Soft Hardware can arbitrarily change its logic function in real time via external control signals without any hardware modification. Implementation of a neural network equipped with an on-chip self-learning capability is also described. Through these studies of vMOS intelligent circuit implementation, we noticed an interesting similarity between the architectures of vMOS logic circuitry and biological systems.
Learning in Computer Vision and Image Understanding
There is an increasing interest in the area of Learning in Computer Vision and Image Understanding, both from researchers in the learning community and from researchers involved with the computer vision world. The field is characterized by a shift away from the classical, purely model-based, computer vision techniques, towards data-driven learning paradigms for solving real-world vision problems. Using learning in segmentation or recognition tasks has several advantages over classical model-based techniques. These include adaptivity to noise and changing environments, as well as in many cases, a simplified system generation procedure. Yet, learning from examples introduces a new challenge - getting a representative data set of examples from which to learn.
Classification of Electroencephalogram using Artificial Neural Networks
Tsoi, A C, So, D S C, Sergejew, A
In this paper, we consider the problem of classifying electroencephalogram (EEG) signals of normal subjects and of subjects suffering from psychiatric disorders, e.g., obsessive-compulsive disorder or schizophrenia, using a class of artificial neural networks, viz., the multilayer perceptron. It is shown that the multilayer perceptron is capable of classifying unseen test EEG signals to a high degree of accuracy.
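For readers less familiar with the classifier named here, the sketch below trains a generic one-hidden-layer multilayer perceptron by back-propagation on placeholder feature vectors; the EEG feature extraction step, network size, learning rate, and labels are assumptions for illustration only and do not reflect the paper's actual data or configuration.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))                              # placeholder: 200 segments, 8 features
y = (X[:, :4].sum(axis=1) > 0).astype(float).reshape(-1, 1)    # placeholder binary labels

n_hidden, lr = 12, 0.1
W1 = rng.standard_normal((8, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, 1)) * 0.1
b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    h = np.tanh(X @ W1 + b1)              # hidden layer
    p = sigmoid(h @ W2 + b2)              # predicted class probability
    d_out = (p - y) / len(X)              # gradient of cross-entropy at the output
    d_h = (d_out @ W2.T) * (1 - h ** 2)   # back-propagated hidden-layer gradient
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

train_accuracy = ((p > 0.5) == y).mean()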
Resolving motion ambiguities
Diamantaras, K. I., Geiger, D.
We address the problem of optical flow reconstruction and in particular the problem of resolving ambiguities near edges. These ambiguities occur due to (i) the aperture problem and (ii) the occlusion problem, where pixels on both sides of an intensity edge are assigned the same velocity estimates (and confidence). However, these measurements are correct for just one side of the edge (the non-occluded one). Our approach is to introduce an uncertainty field with respect to the estimates and confidence measures. We note that the confidence measures are large at intensity edges and larger at the convex side of the edges, i.e., inside corners, than at the concave side. We resolve the ambiguities through local interactions via coupled Markov random fields (MRFs). The result is the detection of motion for regions of images with large global convexity.
Robust Parameter Estimation and Model Selection for Neural Network Regression
In this paper, it is shown that the conventional back-propagation (BP) algorithm for neural network regression is robust to leverage points (data with x corrupted), but not to outliers (data with y corrupted). A robust model is proposed in which the error is modeled as a mixture of normal distributions. The influence function for this mixture model is calculated and the condition for the model to be robust to outliers is given. The EM algorithm [5] is used to estimate the parameters. The usefulness of model selection criteria is also discussed.
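The sketch below illustrates the general idea of modeling the regression error as a two-component mixture of normals (a narrow inlier component and a wide outlier component) and estimating the fit with EM; the linear model, the fixed component standard deviations, and the function names are illustrative simplifications, not the paper's exact formulation.

import numpy as np

def _normal(r, s):
    # Unnormalized Gaussian density of residual r with standard deviation s.
    return np.exp(-0.5 * (r / s) ** 2) / s

def robust_fit(X, y, n_iter=50, sigma_in=1.0, sigma_out=10.0, pi_out=0.1):
    X1 = np.column_stack([np.ones(len(X)), X])        # design matrix with intercept
    w = np.linalg.lstsq(X1, y, rcond=None)[0]         # ordinary least-squares start
    for _ in range(n_iter):
        r = y - X1 @ w
        # E-step: responsibility that each point came from the outlier component.
        p_out = pi_out * _normal(r, sigma_out)
        p_in = (1 - pi_out) * _normal(r, sigma_in)
        gamma = p_out / (p_out + p_in)
        # M-step: weighted least squares that down-weights likely outliers.
        weights = (1 - gamma) / sigma_in ** 2 + gamma / sigma_out ** 2
        A = X1.T @ (weights[:, None] * X1)
        b = X1.T @ (weights * y)
        w = np.linalg.solve(A, b)
        pi_out = gamma.mean()
    return w

Points with large residuals receive responsibility near one for the wide component and therefore contribute little to the refit, which is what makes such an estimate robust to corruption in y.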
Synchronization, oscillations, and 1/f noise in networks of spiking neurons
Stemmler, Martin, Usher, Marius, Koch, Christof, Olami, Zeev
The model consists of a two-dimensional sheet of leaky integrate-and-fire neurons with feedback connectivity consisting of local excitation and surround inhibition. Each neuron is independently driven by homogeneous external noise. Spontaneous symmetry breaking occurs, resulting in the formation of "hotspots" of activity in the network. These localized patterns of excitation appear as clusters that coalesce, disintegrate, or fluctuate in size while simultaneously moving in a random walk constrained by the interaction with other clusters. The emergent cross-correlation functions have a dual structure, with a sharp peak around zero on top of a much broader hill.
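A heavily simplified simulation of such a sheet can be sketched as follows, with the local-excitation/surround-inhibition feedback approximated by a difference of Gaussian filters; the grid size, time constant, kernel widths, and other constants are illustrative choices, not the parameters used in the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
N, tau, v_thresh, v_reset, dt = 64, 20.0, 1.0, 0.0, 1.0
V = np.zeros((N, N))                                  # membrane potentials of the sheet

for t in range(1000):
    spikes = V >= v_thresh                            # units crossing threshold fire
    V[spikes] = v_reset
    s = spikes.astype(float)
    # Feedback: narrow local excitation minus broad surround inhibition.
    drive = gaussian_filter(s, sigma=1.0) - 0.8 * gaussian_filter(s, sigma=4.0)
    noise = 0.15 * rng.standard_normal((N, N))        # independent external noise per unit
    V += dt / tau * (-V) + drive + 0.05 + noise       # leaky integration of inputs

Accumulating the spike patterns over time is one way to inspect whether clustered, drifting activity of the kind described above emerges under a given parameter choice.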
Optimal Unsupervised Motor Learning Predicts the Internal Representation of Barn Owl Head Movements
This implies the existence of a set of orthogonal internal coordinates that are related to meaningful coordinates of the external world. No coherent computational theory has yet been proposed to explain this finding. I have proposed a simple model which provides a framework for a theory of low-level motor learning. I show that the theory predicts the observed microstimulation results in the barn owl. The model rests on the concept of "Optimal Unsupervised Motor Learning", which provides a set of criteria that predict optimal internal representations. I describe two iterative neural network algorithms which find the optimal solution and demonstrate possible mechanisms for the development of internal representations in animals.

1 INTRODUCTION

In the sensory domain, many algorithms for unsupervised learning have been proposed. These algorithms learn depending on statistical properties of the input data, and often can be used to find useful "intermediate" sensory representations.
Supervised learning from incomplete data via an EM approach
Ghahramani, Zoubin, Jordan, Michael I.
Real-world learning tasks may involve high-dimensional data sets with arbitrary patterns of missing data. In this paper we present a framework based on maximum likelihood density estimation for learning from such data sets. We use mixture models for the density estimates and make two distinct appeals to the Expectation Maximization (EM) principle (Dempster et al., 1977) in deriving a learning algorithm: EM is used both for the estimation of mixture components and for coping with missing data. The resulting algorithm is applicable to a wide range of supervised as well as unsupervised learning problems.
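As a rough illustration of the approach, the sketch below fits a mixture of diagonal-covariance Gaussians to a data matrix containing NaN entries, with missing dimensions handled through their expected sufficient statistics inside EM. The diagonal-covariance restriction, the initialization, and all names are simplifying assumptions; the paper's full treatment (including the supervised, full-covariance case) goes further.

import numpy as np

def em_missing(X, K=3, n_iter=100):
    n, d = X.shape
    obs = ~np.isnan(X)                                 # mask of observed entries
    rng = np.random.default_rng(0)
    mu = rng.standard_normal((K, d))
    var = np.ones((K, d))
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities computed from the observed dimensions only.
        logp = np.zeros((n, K))
        for k in range(K):
            Xk = np.where(obs, X, mu[k])               # fill NaNs so arithmetic is defined
            z = (Xk - mu[k]) ** 2 / var[k] + np.log(var[k])
            logp[:, k] = np.log(pi[k]) - 0.5 * (z * obs).sum(axis=1)
        logp -= logp.max(axis=1, keepdims=True)
        R = np.exp(logp)
        R /= R.sum(axis=1, keepdims=True)
        # M-step: update parameters from expected (imputed) sufficient statistics.
        for k in range(K):
            Xk = np.where(obs, X, mu[k])               # expected value of missing entries
            Nk = R[:, k].sum()
            mu_new = (R[:, k, None] * Xk).sum(axis=0) / Nk
            # Missing dimensions also contribute var[k] to the expected second moment.
            var[k] = (R[:, k, None] * ((Xk - mu_new) ** 2 + ~obs * var[k])).sum(axis=0) / Nk + 1e-6
            mu[k], pi[k] = mu_new, Nk / n
    return pi, mu, var

Given the fitted mixture, conditional expectations of missing coordinates (or of targets, in a supervised setting) can then be formed by averaging component means under the responsibilities.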