Goto


 latent structure


InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations

Neural Information Processing Systems

The goal of imitation learning is to mimic expert behavior without access to an explicit reward signal. Expert demonstrations provided by humans, however, often show significant variability due to latent factors that are typically not explicitly modeled. In this paper, we propose a new algorithm that can infer the latent structure of expert demonstrations in an unsupervised way. Our method, built on top of Generative Adversarial Imitation Learning, can not only imitate complex behaviors, but also learn interpretable and meaningful representations of complex behavioral data, including visual demonstrations. In the driving domain, we show that a model learned from human demonstrations is able to both accurately reproduce a variety of behaviors and accurately anticipate human actions using raw visual inputs. Compared with various baselines, our method can better capture the latent structure underlying expert demonstrations, often recovering semantically meaningful factors of variation in the data.
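The structure described here follows the GAIL minimax game, augmented with a latent code c and an InfoGAN-style mutual-information term; schematically (notation is assumed from the InfoGAIL/InfoGAN line of work, not quoted from the abstract):

\[
\min_{\pi, Q} \max_{D} \; \mathbb{E}_{\pi}[\log D(s,a)] + \mathbb{E}_{\pi_E}[\log(1 - D(s,a))] - \lambda_1 L_I(\pi, Q) - \lambda_2 H(\pi),
\]
\[
L_I(\pi, Q) = \mathbb{E}_{c \sim p(c),\, a \sim \pi(\cdot \mid s, c)}[\log Q(c \mid s, a)] + H(c),
\]

where D is the discriminator, H(\pi) is the policy's causal entropy, and L_I is a variational lower bound on the mutual information between the latent code c and the generated trajectory, with Q approximating the posterior over codes. Maximizing this bound is what ties each code to a distinct, recoverable mode of expert behavior.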


Deep Dynamic Poisson Factorization Model

Neural Information Processing Systems

We propose a new model, the deep dynamic Poisson factorization model, for analyzing sequential count vectors. Built on Poisson factor analysis, the model captures dependence among time steps with neural networks that represent the implicit distributions: local implicit distributions capture short-range relationships, while a deep latent hierarchy captures long-range temporal dependence. Inference is performed by variational inference over the latent variables, with gradient descent on loss functions derived from the variational distribution. On both synthetic and real-world datasets, the proposed model shows good predictive and fitting performance with interpretable latent structure.
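The Poisson factor analysis foundation can be illustrated with a plain count-matrix factorization. The sketch below uses multiplicative (KL-NMF) updates, which minimize the Poisson negative log-likelihood up to a constant; it is an assumption-laden simplification, not the paper's deep dynamic variant or its variational inference scheme, and all dimensions are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
T, V, K = 40, 30, 5                      # time steps, vocabulary size, factors

# Generate synthetic counts from a ground-truth nonnegative factorization.
theta_true = rng.gamma(1.0, 1.0, size=(T, K))
phi_true = rng.gamma(1.0, 1.0, size=(K, V))
X = rng.poisson(theta_true @ phi_true)

# Multiplicative updates for the KL objective, which matches the Poisson
# log-likelihood X[t, v] ~ Pois((Theta @ Phi)[t, v]) up to a constant.
Theta = rng.uniform(0.5, 1.5, size=(T, K))
Phi = rng.uniform(0.5, 1.5, size=(K, V))
eps = 1e-10

def poisson_nll(X, rate):
    # Poisson negative log-likelihood, dropping the constant log(X!) term.
    return float(np.sum(rate - X * np.log(rate + eps)))

losses = []
for _ in range(200):
    rate = Theta @ Phi + eps
    Theta *= (X / rate) @ Phi.T / (Phi.sum(axis=1) + eps)
    rate = Theta @ Phi + eps
    Phi *= Theta.T @ (X / rate) / (Theta.sum(axis=0)[:, None] + eps)
    losses.append(poisson_nll(X, Theta @ Phi + eps))

print(losses[-1] < losses[0])  # the updates monotonically decrease the loss
```

The deep dynamic model replaces the independent rows of Theta with temporally coupled latent variables parameterized by neural networks; the nonnegativity and Poisson likelihood shown here carry over.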


Deep Poisson gamma dynamical systems

Neural Information Processing Systems

We develop deep Poisson-gamma dynamical systems (DPGDS) to model sequentially observed multivariate count data, improving on previously proposed models by not only mining deep hierarchical latent structure from the data, but also capturing both first-order and long-range temporal dependencies. Using sophisticated but simple-to-implement data augmentation techniques, we derive closed-form Gibbs sampling update equations by first propagating auxiliary latent counts backward and upward, and then sampling latent variables forward and downward. Moreover, we develop stochastic gradient MCMC inference that is scalable to very long multivariate count time series. Experiments on synthetic data and a variety of real-world datasets demonstrate that the proposed model not only has excellent predictive performance, but also provides a highly interpretable multilayer latent structure representing hierarchical and temporal information propagation.
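The generative backbone can be sketched for a single layer: gamma-distributed latent factors evolve through a nonnegative transition matrix, and counts are emitted through a Poisson link. This is an illustrative simulation with made-up hyperparameters; it shows the Poisson-gamma state-space structure only, not the paper's deep hierarchy or its Gibbs/SG-MCMC inference:

```python
import numpy as np

rng = np.random.default_rng(1)
T, V, K = 50, 20, 4        # time steps, observed dims, latent factors
tau = 5.0                  # concentration controlling transition noise

# Column-normalized nonnegative transition matrix couples factors over time.
Pi = rng.gamma(1.0, 1.0, size=(K, K))
Pi /= Pi.sum(axis=0, keepdims=True)

# Nonnegative factor loadings map latent activity to observed count rates.
Phi = rng.gamma(1.0, 1.0, size=(V, K))
Phi /= Phi.sum(axis=0, keepdims=True)

theta = np.empty((T, K))
theta[0] = rng.gamma(1.0, 1.0, size=K)
X = np.empty((T, V), dtype=int)
X[0] = rng.poisson(Phi @ theta[0])

for t in range(1, T):
    # Gamma transition: E[theta_t | theta_{t-1}] = Pi @ theta_{t-1}, with tau
    # controlling how tightly theta_t concentrates around that mean.
    theta[t] = rng.gamma(tau * (Pi @ theta[t - 1]) + 1e-8, 1.0 / tau)
    X[t] = rng.poisson(Phi @ theta[t])

print(X.shape, bool(theta.min() >= 0))
```

Stacking further gamma layers on top of theta, so that higher layers set the shape parameters of lower ones, gives the deep hierarchy the abstract describes.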


7b39f4512a2e3899edcc59c7501f3cd4-Paper-Conference.pdf

Neural Information Processing Systems

The LDS model is built on the state-space model and assumes latent factors evolve with linear dynamics. On the other hand, GPFA models the latent vectors by non-parametric Gaussian processes.
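The contrast can be made concrete by sampling a one-dimensional latent trajectory each way; this is a minimal sketch with arbitrary parameters, not either model's fitted version:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 100
ts = np.arange(T)

# LDS latent: linear-Gaussian dynamics z_t = a * z_{t-1} + noise, so
# smoothness comes from the dynamics parameters (a, q).
a, q = 0.95, 0.1
z_lds = np.empty(T)
z_lds[0] = rng.normal(0.0, 1.0)
for t in range(1, T):
    z_lds[t] = a * z_lds[t - 1] + rng.normal(0.0, np.sqrt(q))

# GPFA-style latent: a single draw from a Gaussian process with an RBF
# kernel, so smoothness comes from the kernel length scale instead.
length_scale = 10.0
K = np.exp(-0.5 * (ts[:, None] - ts[None, :]) ** 2 / length_scale**2)
z_gp = rng.multivariate_normal(np.zeros(T), K + 1e-6 * np.eye(T))

print(z_lds.shape, z_gp.shape)
```

The LDS trajectory is Markovian with a fixed autocorrelation decay, while the GP draw's correlation structure is set directly by the kernel, without assuming any parametric dynamics.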






Identifying signal and noise structure in neural population activity with Gaussian process factor models

Neural Information Processing Systems

Neural datasets often contain measurements of neural activity across multiple trials of a repeated stimulus or behavior. An important problem in the analysis of such datasets is to characterize systematic aspects of neural activity that carry information about the repeated stimulus or behavior of interest, which can be considered "signal", and to separate them from the trial-to-trial fluctuations in activity that are not time-locked to the stimulus, which for purposes of such analyses can be considered "noise". Gaussian process factor models provide a powerful tool for identifying shared structure in high-dimensional neural data.
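A minimal simulation of that signal/noise decomposition: one smooth latent shared by every trial (time-locked "signal") and one smooth latent redrawn per trial (non-time-locked "noise"), each projected onto the neurons through a loading vector. All parameters and loadings here are invented for illustration; this is the generative idea only, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(3)
T, N, R = 60, 12, 20        # time bins, neurons, trials
ts = np.arange(T)

def rbf_kernel(ts, length_scale):
    d = ts[:, None] - ts[None, :]
    return np.exp(-0.5 * d**2 / length_scale**2) + 1e-6 * np.eye(len(ts))

# Signal latent: one GP draw, time-locked and therefore shared by all trials.
x_signal = rng.multivariate_normal(np.zeros(T), rbf_kernel(ts, 8.0))
c_signal = rng.normal(0.0, 1.0, size=N)     # loading onto neurons

# Noise latent: an independent GP draw per trial (not time-locked).
c_noise = rng.normal(0.0, 1.0, size=N)
K_noise = rbf_kernel(ts, 4.0)

Y = np.empty((R, T, N))
for r in range(R):
    x_noise = rng.multivariate_normal(np.zeros(T), K_noise)
    Y[r] = (np.outer(x_signal, c_signal)
            + np.outer(x_noise, c_noise)
            + 0.1 * rng.normal(size=(T, N)))

# Averaging over trials suppresses the per-trial noise latent (its mean
# shrinks like 1/sqrt(R)), so the trial mean approximates the signal part.
mean_Y = Y.mean(axis=0)
corr = np.corrcoef(mean_Y.ravel(), np.outer(x_signal, c_signal).ravel())[0, 1]
print(Y.shape)
```

A GP factor model of this form can separate the two components directly from the trials themselves, rather than relying on trial averaging, by assigning the shared latent to signal and the trial-varying latents to noise.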