Elastic-InfoGAN: Unsupervised Disentangled Representation Learning in Imbalanced Data

arXiv.org Machine Learning

Utkarsh Ojha (University of California, Davis), Krishna Kumar Singh (University of California, Davis), Cho-Jui Hsieh (University of California, Los Angeles), and Yong Jae Lee (University of California, Davis)

Abstract: We propose a novel unsupervised generative model, Elastic-InfoGAN, that learns to disentangle object identity from other low-level aspects in class-imbalanced datasets. We first investigate the issues surrounding the uniformity assumptions made by InfoGAN (Chen et al. (2016)), and demonstrate its failure to properly disentangle object identity in imbalanced data. Our key idea is to make the discovery of the discrete latent factor of variation invariant to identity-preserving transformations in real images, and to use that as the signal for learning the latent distribution's parameters. Experiments on both artificial (MNIST) and real-world (YouTube-Faces) datasets demonstrate the effectiveness of our approach on imbalanced data through: (i) better disentanglement of object identity as a latent factor of variation; and (ii) better approximation of the class imbalance in the data, as reflected in the learned parameters of the latent distribution.

Recent deep neural network based models such as Generative Adversarial Networks (Goodfellow et al. (2014); Salimans et al. (2016); Radford et al. (2016)) and Variational Autoencoders (Kingma & Welling (2014); Higgins et al. (2017)) have led to promising results in generating realistic samples for high-dimensional and complex data such as images. More advanced models show how to discover disentangled representations (Yan et al. (2016); Chen et al. (2016); Tran et al. (2017); Hu et al. (2018); Singh et al. (2019)), in which different latent dimensions can be made to represent independent factors of variation (e.g., pose, identity) in the data (e.g., human faces). InfoGAN (Chen et al. (2016)), in particular, tries to learn an unsupervised disentangled representation by maximizing the mutual information between the discrete or continuous latent variables and the corresponding generated samples. For discrete latent factors (e.g., digit identities), it assumes that they are uniformly distributed in the data, and approximates them accordingly using a fixed uniform categorical distribution. Although this assumption holds true for many existing benchmark datasets (e.g., MNIST (LeCun (1998))), real-world data often follows a long-tailed distribution and rarely exhibits perfect balance between categories.
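To make the mechanism concrete, here is a minimal sketch (in PyTorch; not the authors' code) of an InfoGAN-style mutual-information term in which the categorical prior's probabilities are learnable rather than fixed uniform. The Gumbel-Softmax relaxation, the tiny fully-connected networks, and the names generator and q_head are illustrative assumptions; the abstract's identity-preserving-transformation signal on real images would add a consistency loss that is omitted here.

    # Minimal sketch, assuming PyTorch: a learnable categorical prior for an
    # InfoGAN-style latent code, in place of a fixed uniform distribution.
    import torch
    import torch.nn.functional as F

    n_classes, z_dim, img_dim = 10, 64, 784

    # Learnable logits: the class probabilities adapt during training.
    class_logits = torch.zeros(n_classes, requires_grad=True)

    def sample_latent(batch_size, tau=1.0):
        """Draw a differentiable ~one-hot code c via Gumbel-Softmax, plus noise z."""
        logits = class_logits.expand(batch_size, -1)
        c = F.gumbel_softmax(logits, tau=tau, hard=True)
        z = torch.randn(batch_size, z_dim)
        return torch.cat([z, c], dim=1), c

    generator = torch.nn.Sequential(      # toy generator: latent -> flat image
        torch.nn.Linear(z_dim + n_classes, 256), torch.nn.ReLU(),
        torch.nn.Linear(256, img_dim), torch.nn.Tanh())

    q_head = torch.nn.Sequential(         # Q-network: image -> predicted code
        torch.nn.Linear(img_dim, 256), torch.nn.ReLU(),
        torch.nn.Linear(256, n_classes))

    latent, c = sample_latent(32)
    fake = generator(latent)
    # Mutual-information lower bound: Q must recover the code from the image.
    info_loss = F.cross_entropy(q_head(fake), c.argmax(dim=1))
    info_loss.backward()   # gradients also reach class_logits, so the
                           # learned prior can drift toward the true imbalance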


Estimating Heterogeneous Consumer Preferences for Restaurants and Travel Time Using Mobile Location Data

arXiv.org Machine Learning

This paper analyzes consumer choices over lunchtime restaurants using data from a sample of several thousand anonymous mobile phone users in the San Francisco Bay Area. The data are used to identify each user's approximate typical morning location, as well as their choices of lunchtime restaurants. We build a model in which restaurants have latent characteristics (whose distribution may depend on restaurant observables, such as star ratings, food category, and price range), each user has preferences for these latent characteristics, and these preferences are heterogeneous across users. Similarly, each restaurant has latent characteristics that describe users' willingness to travel to it, and each user has individual-specific preferences for those latent characteristics. Thus, both users' willingness to travel and their base utility for each restaurant vary across user-restaurant pairs. We use a Bayesian approach to estimation. To make estimation computationally feasible, we rely on variational inference to approximate the posterior distribution, and on stochastic gradient descent for optimization. Our model performs better than more standard competing models such as multinomial logit and nested logit models, in part due to the personalization of its estimates. We analyze how, after a restaurant closes, consumers re-allocate their demand between nearby restaurants and more distant restaurants with similar characteristics, and we compare our predictions to actual outcomes. Finally, we show how the model can be used to analyze counterfactual questions, such as what type of restaurant would attract the most consumers in a given location.
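As one concrete reading of the model structure, the sketch below shows a latent-factor multinomial logit in which both the preference match and the travel-time disutility vary by user. All names, dimensions, and data are synthetic stand-ins, and simple random parameters replace the paper's variational Bayesian estimates; this illustrates the utility specification, not the estimation procedure.

    # Illustrative sketch, assuming NumPy: per-user latent preferences and a
    # per-user travel-time weight combine into multinomial-logit choice
    # probabilities. All parameters here are random placeholders.
    import numpy as np

    n_users, n_rest, k = 100, 50, 8
    rng = np.random.default_rng(0)

    theta_u = rng.normal(size=(n_users, k))        # user preference factors
    beta_r = rng.normal(size=(n_rest, k))          # restaurant latent traits
    gamma_u = np.abs(rng.normal(size=n_users))     # user travel-time weight
    travel = np.abs(rng.normal(size=(n_users, n_rest)))  # hypothetical minutes

    def choice_probs(u):
        """Softmax over restaurants: preference match minus travel disutility."""
        utility = beta_r @ theta_u[u] - gamma_u[u] * travel[u]
        e = np.exp(utility - utility.max())        # numerically stable softmax
        return e / e.sum()

    print(choice_probs(0)[:5])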


RNA Modeling Using Gibbs Sampling and Stochastic Context Free Grammars

AAAI Conferences

Leslie Grate, Mark Herbster, Richard Hughey, and David Haussler (Baskin Center for Computer Engineering and Computer and Information Sciences, University of California, Santa Cruz, CA 95064) and I. Saira Mian and Harry Noller (Sinsheimer Laboratories, University of California, Santa Cruz, CA 95064). Keywords: RNA secondary structure, Gibbs sampler, Expectation Maximization, stochastic context-free grammars, hidden Markov models, tRNA, snRNA, 16S rRNA, linguistic methods.

Abstract: A new method of discovering the common secondary structure of a family of homologous RNA sequences using Gibbs sampling and stochastic context-free grammars is proposed. Gibbs sampling first produces parameters describing a crude statistical model of the family; this model is then translated into a stochastic context-free grammar, which is refined by an Expectation Maximization (EM) procedure to produce a more complete model. A prototype implementation of the method is tested on tRNA, pieces of 16S rRNA, and U5 snRNA with good results.

Introduction: Tools for analyzing RNA are becoming increasingly important as in vitro evolution and selection techniques produce greater numbers of synthesized RNA families to supplement those related by phylogeny. Two principal methods have been established for predicting RNA secondary structure base pairings. The first is comparative (phylogenetic) analysis of homologous sequences; the second employs thermodynamics to compare the free energy changes predicted for the formation of possible secondary structures, and relies on finding the structure with the lowest free energy (Tinoco Jr., Uhlenbeck, & Levine 1971; Turner, Sugimoto, & Freier 1988). When several related sequences are available that all share a common secondary structure, combinations of different approaches have been used to obtain improved results (Waterman 1989; Le & Zuker 1991; Han & Kim 1993; Chiu & Kolodziejczak 1991; Sankoff 1985; Winker et al. 1990; Lapedes 1992; Klinger & Brutlag 1993; Gutell et al. 1992). Recent efforts have applied Stochastic Context-Free Grammars (SCFGs) to the problems of statistical modeling, multiple alignment, discrimination, and prediction of the secondary structure of RNA families (Sakakibara et al. 1994; 1993; Eddy & Durbin 1994; Searls 1993). (This work was supported in part by NSF grants CDA-9115268 and IRI-9123692, and NIH grant GM17129.)
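To illustrate what a stochastic context-free grammar contributes, here is a toy inside-probability computation for a miniature RNA grammar with only pairing, unpaired-emission, and termination rules. The grammar and its probabilities are invented for illustration and are far simpler than the tRNA and rRNA models the paper builds; because this toy grammar has no branching rule, a two-index recursion suffices in place of the full inside-outside machinery.

    # Toy sketch: inside probability of an RNA string under a miniature SCFG.
    # Rules (hypothetical probabilities, summing to one over all rules):
    #   S -> x S y  (Watson-Crick pair), S -> x S (unpaired base), S -> eps
    from functools import lru_cache

    p_pair = {('a', 'u'): 0.20, ('u', 'a'): 0.20,
              ('g', 'c'): 0.25, ('c', 'g'): 0.25}
    p_unpaired, p_end = 0.02, 0.02    # per-base emission, termination

    def inside(seq):
        """Probability that the grammar derives seq from the start symbol S."""
        @lru_cache(maxsize=None)
        def score(i, j):
            if i == j:                                    # S -> eps
                return p_end
            total = p_unpaired * score(i + 1, j)          # S -> seq[i] S
            if j - i >= 2 and (seq[i], seq[j - 1]) in p_pair:
                total += p_pair[(seq[i], seq[j - 1])] * score(i + 1, j - 1)
            return total
        return score(0, len(seq))

    # A hairpin-like string with complementary ends scores orders of magnitude
    # higher than an unpairable one, since pairing rules carry most of the mass.
    print(inside("gcgaaacgc"), inside("aaaaaaaaa"))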


A Comparison of Dynamic Reposing and Tangent Distance for Drug Activity Prediction

Neural Information Processing Systems

Thomas G. Dietterich (Arris Pharmaceutical Corporation and Oregon State University, Corvallis, OR 97331-3202), Ajay N. Jain (Arris Pharmaceutical Corporation, 385 Oyster Point Blvd., Suite 3, South San Francisco, CA 94080), Richard H. Lathrop and Tomas Lozano-Perez (Arris Pharmaceutical Corporation and MIT Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA 02139)

Abstract: In drug activity prediction (as in handwritten character recognition), the features extracted to describe a training example depend on the pose (location, orientation, etc.) of the example. In handwritten character recognition, one of the best techniques for addressing this problem is the tangent distance method of Simard, LeCun and Denker (1993). Jain et al. (1993a; 1993b) introduce a new technique, dynamic reposing, that also addresses this problem. Dynamic reposing iteratively learns a neural network and then reposes the examples in an effort to maximize the predicted output values. New models are trained and new poses computed until models and poses converge. This paper compares dynamic reposing to the tangent distance method on the task of predicting the biological activity of musk compounds.
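The alternating structure of dynamic reposing is easy to sketch. Below, an ordinary least-squares model stands in for the paper's neural network, and molecules, candidate poses, and activities are synthetic arrays; only the loop structure (fit, repose to maximize predicted activity, repeat until poses stop changing) reflects the method described above.

    # Schematic sketch, assuming NumPy: alternate between fitting a model on
    # the current pose of each molecule and re-selecting the pose that
    # maximizes the predicted activity. Data and model are toy stand-ins.
    import numpy as np

    rng = np.random.default_rng(0)
    n_mol, n_poses, d = 40, 5, 8
    candidate_feats = rng.normal(size=(n_mol, n_poses, d))  # pose features
    labels = rng.normal(size=n_mol)                         # activities

    def fit_linear(X, y):
        """Least-squares stand-in for the neural network."""
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return w

    pose_idx = np.zeros(n_mol, dtype=int)        # start from arbitrary poses
    for _ in range(10):
        X = candidate_feats[np.arange(n_mol), pose_idx]
        w = fit_linear(X, labels)                # refit on current poses
        new_idx = (candidate_feats @ w).argmax(axis=1)   # repose step
        if np.array_equal(new_idx, pose_idx):    # models and poses converged
            break
        pose_idx = new_idx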


How Drive.ai Is Mastering Autonomous Driving With Deep Learning

#artificialintelligence

Among all of the self-driving startups working toward Level 4 autonomy (a self-driving system that doesn't require human intervention in most scenarios), Mountain View, Calif.-based Drive.ai sees deep learning as the only viable way to make a truly useful autonomous car in the near term, says Sameep Tandon, cofounder and CEO. "If you look at the long-term possibilities of these algorithms and how people are going to build [self-driving cars] in the future, having a learning system just makes the most sense. There's so much complication in driving, there are so many things that are nuanced and hard, that if you have to do this in ways that aren't learned, then you're never going to get these cars out there." It's only been about a year since Drive went public, but already, the company has a fleet of four vehicles navigating (mostly) autonomously around the San Francisco Bay Area, even in situations (such as darkness, rain, or hail) that are notoriously difficult for self-driving cars. Last month, we went out to California to take a ride in one of Drive's cars, and to find out how it's using deep learning to master autonomous driving.