Technology
Exploiting Model Uncertainty Estimates for Safe Dynamic Control Learning
Figure 2: The task is to move the cart to the origin as quickly as possible without dropping the pole. The bottom three pictures show a trace of the policy execution obtained after one, two, and three trials (shown in increments of 0.5 seconds). [Table: Controller | Number of data points used to build the controller | Cost from initial state 17; controllers include LQR.]
Representing Face Images for Emotion Classification
Padgett, Curtis, Cottrell, Garrison W.
We compare the generalization performance of three distinct representation schemes for facial emotions using a single classification strategy (neural network). The face images presented to the classifiers are represented as: full face projections of the dataset onto their eigenvectors (eigenfaces); a similar projection constrained to eye and mouth areas (eigenfeatures); and finally a projection of the eye and mouth areas onto the eigenvectors obtained from 32x32 random image patches from the dataset. The latter system achieves 86% generalization on novel face images (individuals the networks were not trained on) drawn from a database in which human subjects consistently identify a single emotion for the face. 1 Introduction Some of the most successful research in machine perception of complex natural image objects (like faces) has relied heavily on reduction strategies that encode an object as a set of values spanning the principal component subspace of the object's images [Cottrell and Metcalfe, 1991, Pentland et al., 1994]. This approach has gained wide acceptance for its success in classification, for the efficiency with which the eigenvectors can be calculated, and because the technique permits a biologically plausible implementation. The procedure followed in generating these face representations requires normalizing a large set of face views ("mugshots") and, from these, identifying a statistically relevant subspace.
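The eigenface-style projection described in this abstract can be sketched as follows; the random stand-in data, image size, and number of retained components are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 100 flattened 32x32 grayscale "face" images.
faces = rng.random((100, 32 * 32))

# Center the data and compute the principal components (eigenvectors of
# the covariance matrix) via SVD, as in the eigenface approach.
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)

# Keep the top k eigenvectors and represent each face by its projection
# coefficients onto that subspace.
k = 10
eigenfaces = vt[:k]                      # (k, 1024) orthonormal basis
codes = centered @ eigenfaces.T          # (100, k) low-dimensional codes

# Reconstruction from the codes approximates the original faces; the
# codes (not the raw pixels) would be fed to the emotion classifier.
recon = codes @ eigenfaces + mean_face
```

The same projection applies equally to eye/mouth crops (eigenfeatures): only the rows of `faces` change.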
Spectroscopic Detection of Cervical Pre-Cancer through Radial Basis Function Networks
Tumer, Kagan, Ramanujam, Nirmala, Richards-Kortum, Rebecca R., Ghosh, Joydeep
The mortality related to cervical cancer can be substantially reduced through early detection and treatment. However, current detection techniques, such as Pap smear and colposcopy, fail to achieve concurrently high sensitivity and specificity. In vivo fluorescence spectroscopy is a technique which quickly, noninvasively and quantitatively probes the biochemical and morphological changes that occur in precancerous tissue. RBF ensemble algorithms based on such spectra provide an automated, near real-time implementation of pre-cancer detection in the hands of nonexperts. The results are more reliable, direct and accurate than those achieved by either human experts or multivariate statistical algorithms. 1 Introduction Cervical carcinoma is the second most common cancer in women worldwide, exceeded only by breast cancer (Ramanujam et al., 1996). The mortality related to cervical cancer can be reduced if this disease is detected at the precancerous state, known as squamous intraepithelial lesion (SIL). Currently, a Pap smear is used to screen for cervical cancer (Kurman et al., 1994). In a Pap test, a large number of cells obtained by scraping the cervical epithelium are smeared onto a slide which is then fixed and stained for cytologic examination.
Reinforcement Learning for Dynamic Channel Allocation in Cellular Telephone Systems
Singh, Satinder P., Bertsekas, Dimitri P.
In cellular telephone systems, an important problem is to dynamically allocate the communication resource (channels) so as to maximize service in a stochastic caller environment. This problem is naturally formulated as a dynamic programming problem and we use a reinforcement learning (RL) method to find dynamic channel allocation policies that are better than previous heuristic solutions. The policies obtained perform well for a broad variety of call traffic patterns.
3D Object Recognition: A Model of View-Tuned Neurons
Bricolo, Emanuela, Poggio, Tomaso, Logothetis, Nikos K.
Recognition of specific objects, such as recognition of a particular face, can be based on representations that are object centered, such as 3D structural models. Alternatively, a 3D object may be represented for the purpose of recognition in terms of a set of views. This latter class of models is biologically attractive because model acquisition - the learning phase - is simpler and more natural. A simple model for this strategy of object recognition was proposed by Poggio and Edelman (Poggio and Edelman, 1990). They showed that, with few views of an object used as training examples, a classification network, such as a Gaussian radial basis function network, can learn to recognize novel views of that object.
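A toy version of the view-based scheme cited here can be sketched as follows; the 2-D "views" parameterized by angle, the stored view angles, and the kernel width are all illustrative assumptions:

```python
import numpy as np

# Toy Poggio-Edelman-style scheme: views of an "object" are points on a
# circle parameterized by viewing angle; a network of Gaussian radial
# basis units centered on a few stored training views scores novel views.
def view(angle):
    return np.array([np.cos(angle), np.sin(angle)])

train_angles = np.array([0.0, 1.0, 2.0, 3.0])   # few training views
centers = np.stack([view(a) for a in train_angles])
sigma = 0.7                                      # assumed kernel width

def rbf_score(v):
    # Sum of Gaussian units centered on the stored training views;
    # high response means the input resembles a known view of the object.
    d2 = ((centers - v) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2)).sum()

novel_same = rbf_score(view(1.5))            # novel view of trained object
different = rbf_score(np.array([5.0, 5.0]))  # far from all stored views
```

A novel view interpolated between stored views scores high, while an unrelated input scores near zero, mirroring the view-tuned behavior described in the abstract.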
A Mixture of Experts Classifier with Learning Based on Both Labelled and Unlabelled Data
Miller, David J., Uyar, Hasan S.
We address statistical classifier design given a mixed training set consisting of a small labelled feature set and a (generally larger) set of unlabelled features. This situation arises, e.g., for medical images, where although training features may be plentiful, expensive expertise is required to extract their class labels. We propose a classifier structure and learning algorithm that make effective use of unlabelled data to improve performance. The learning is based on maximization of the total data likelihood, i.e. over both the labelled and unlabelled data subsets. Two distinct EM learning algorithms are proposed, differing in the EM formalism applied for unlabelled data. The classifier, based on a joint probability model for features and labels, is a "mixture of experts" structure that is equivalent to the radial basis function (RBF) classifier, but unlike RBFs, is amenable to likelihood-based training. The scope of application for the new method is greatly extended by the observation that test data, or any new data to classify, is in fact additional, unlabelled data - thus, a combined learning/classification operation - much akin to what is done in image segmentation - can be invoked whenever there is new data to classify. Experiments with data sets from the UC Irvine database demonstrate that the new learning algorithms and structure achieve substantial performance gains over alternative approaches.
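The core idea of maximizing the total likelihood over labelled and unlabelled data together can be sketched with a minimal EM loop; the 1-D two-class Gaussian mixture, the sample sizes, and the iteration count are illustrative assumptions rather than the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D data: two Gaussian classes, a few labelled points and
# many unlabelled ones, mimicking the mixed training set in the abstract.
x_lab = np.concatenate([rng.normal(-2, 1, 5), rng.normal(2, 1, 5)])
y_lab = np.array([0] * 5 + [1] * 5)
x_unl = np.concatenate([rng.normal(-2, 1, 200), rng.normal(2, 1, 200)])

mu = np.array([-1.0, 1.0])   # initial class means
var = np.array([1.0, 1.0])   # initial class variances
pi = np.array([0.5, 0.5])    # initial class priors

def gauss(x, m, v):
    return np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

for _ in range(50):
    # E-step: labelled points get one-hot responsibilities; unlabelled
    # points get posterior class probabilities under the current model.
    r_lab = np.eye(2)[y_lab]
    p = pi * gauss(x_unl[:, None], mu, var)
    r_unl = p / p.sum(axis=1, keepdims=True)

    # M-step: update parameters to maximize the total (labelled plus
    # unlabelled) data likelihood.
    r = np.vstack([r_lab, r_unl])
    x = np.concatenate([x_lab, x_unl])
    n = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / n
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    pi = n / n.sum()
```

The combined learning/classification trick mentioned in the abstract amounts to appending any new test points to `x_unl` before running the same loop.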
Bangs, Clicks, Snaps, Thuds and Whacks: An Architecture for Acoustic Transient Processing
Pineda, Fernando J., Cauwenberghs, Gert, Edwards, R. Timothy
We report progress towards our long-term goal of developing low-cost, low-power, lowcomplexity analog-VLSI processors for real-time applications. We propose a neuromorphic architecture for acoustic processing in analog VLSI. The characteristics of the architecture are explored by using simulations and real-world acoustic transients. We use acoustic transients in our experiments because information in the form of acoustic transients pervades the natural world. Insects, birds, and mammals (especially marine mammals) all employ acoustic signals with rich transient structure.
Statistical Mechanics of the Mixture of Experts
Kang, K., Oh, J.
The mixture of experts [1, 2] is a well known example which implements the philosophy of divide-and-conquer elegantly. While this model is gaining popularity in various applications, there has been little effort to theoretically evaluate the generalization capability of these modular approaches. Here we present the first analytic study of generalization in the mixture of experts from the statistical physics perspective. The statistical mechanics formulation has mainly been applied to the study of feedforward neural network architectures close to the multilayer perceptron [5, 6], together with the VC theory [8]. We expect that the statistical mechanics approach can also be effectively used to evaluate more advanced architectures, including mixture models.
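The divide-and-conquer structure analyzed here, a gating network that softly assigns inputs to experts, can be sketched as follows; the linear experts, input dimension, and expert count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: 3 linear experts on 4-D inputs with a softmax
# gating network that divides the input space among them.
n_experts, d_in = 3, 4
W_experts = rng.normal(size=(n_experts, d_in))  # one linear expert per row
W_gate = rng.normal(size=(n_experts, d_in))     # gating network weights

def mixture_of_experts(x):
    # Gating network: softmax over expert scores yields input-dependent
    # mixing weights that are positive and sum to one.
    scores = W_gate @ x
    g = np.exp(scores - scores.max())
    g /= g.sum()
    # Each expert makes its own prediction; the output is the
    # gate-weighted combination.
    expert_out = W_experts @ x
    return g @ expert_out, g

y, g = mixture_of_experts(rng.normal(size=d_in))
```

Generalization analyses of this architecture study how the error of such gated combinations scales with the number of training examples.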
Genetic Algorithms and Explicit Search Statistics
The genetic algorithm (GA) is a heuristic search procedure based on mechanisms abstracted from population genetics. In a previous paper [Baluja & Caruana, 1995], we showed that much simpler algorithms, such as hillclimbing and Population-Based Incremental Learning (PBIL), perform comparably to GAs on an optimization problem custom designed to benefit from the GA's operators. This paper extends these results in two directions. First, in a large-scale empirical comparison of problems that have been reported in the GA literature, we show that on many problems, simpler algorithms can perform significantly better than GAs. Second, we describe when crossover is useful, and show how it can be incorporated into PBIL. 1 IMPLICIT VS.
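The PBIL algorithm contrasted with the GA above can be sketched in a few lines; the onemax objective (maximize the number of 1-bits) and the hyperparameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Population-Based Incremental Learning (PBIL) on onemax. A probability
# vector replaces the GA's explicit population and its crossover/mutation.
n_bits, pop_size, lr, generations = 20, 50, 0.1, 100
prob = np.full(n_bits, 0.5)   # per-bit probability of sampling a 1

for _ in range(generations):
    # Sample a population of bitstrings from the probability vector.
    pop = (rng.random((pop_size, n_bits)) < prob).astype(int)
    # Shift the probability vector toward the best sample; this implicit
    # statistic summarizes what a GA population would carry explicitly.
    best = pop[pop.sum(axis=1).argmax()]
    prob = (1 - lr) * prob + lr * best
```

After enough generations the probability vector concentrates near the all-ones optimum, illustrating how a simple explicit search statistic can match the GA's implicit one on this class of problems.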