Plotting

 Noah Goodman


Bias and Generalization in Deep Generative Models: An Empirical Study

Neural Information Processing Systems

In high dimensional settings, density estimation algorithms rely crucially on their inductive bias. Despite recent empirical success, the inductive bias of deep generative models is not well understood. In this paper we propose a framework to systematically investigate bias and generalization in deep generative models of images. Inspired by experimental methods from cognitive psychology, we probe each learning algorithm with carefully designed training datasets to characterize when and how existing models generate novel attributes and their combinations. We identify similarities to human psychology and verify that these patterns are consistent across commonly used models and architectures.


Multimodal Generative Models for Scalable Weakly-Supervised Learning

Neural Information Processing Systems

Learning a joint representation of multiple co-occurring modalities should yield deeper and more useful representations. Previous generative approaches to multimodal input either do not learn a joint distribution or require additional computation to handle missing data. Here, we introduce a multimodal variational autoencoder (MVAE) that uses a product-of-experts inference network and a sub-sampled training paradigm to solve the multimodal inference problem. Notably, our model shares parameters to efficiently learn under any combination of missing modalities. We apply the MVAE to four datasets and match state-of-the-art performance using many fewer parameters. In addition, we show that the MVAE is directly applicable to weakly-supervised learning and is robust to incomplete supervision. We then consider two case studies: one of learning image transformations (edge detection, colorization, segmentation) as a set of modalities, followed by one of machine translation between two languages. We find appealing results across this range of tasks.
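
The product-of-experts inference network mentioned in the abstract combines per-modality Gaussian posteriors by precision weighting, which also explains how missing modalities are handled: absent experts are simply left out of the product. Below is a minimal NumPy sketch of that combination (illustrative names, not the authors' code), including the standard-normal prior expert.

```python
# Minimal sketch of a product-of-experts (PoE) combination of diagonal
# Gaussian experts, as used by the MVAE. Names are illustrative.
import numpy as np

def product_of_experts(mus, logvars):
    """Combine Gaussian experts (plus a standard-normal prior expert).

    mus, logvars: arrays of shape (num_present_modalities, latent_dim).
    Missing modalities are simply omitted from the stack.
    """
    # Prepend the prior expert N(0, I).
    mus = np.vstack([np.zeros_like(mus[:1]), mus])
    logvars = np.vstack([np.zeros_like(logvars[:1]), logvars])

    precisions = np.exp(-logvars)                 # 1 / sigma^2 per expert
    joint_precision = precisions.sum(axis=0)
    joint_var = 1.0 / joint_precision
    joint_mu = (mus * precisions).sum(axis=0) * joint_var
    return joint_mu, np.log(joint_var)

# Example: two observed modalities, 4-dimensional latent space.
mu1, lv1 = np.zeros(4), np.zeros(4)
mu2, lv2 = np.ones(4), np.full(4, -1.0)
joint_mu, joint_logvar = product_of_experts(np.stack([mu1, mu2]),
                                            np.stack([lv1, lv2]))
print(joint_mu, np.exp(joint_logvar))
```
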


Neurally-Guided Procedural Models: Amortized Inference for Procedural Graphics Programs using Neural Networks

Neural Information Processing Systems

Probabilistic inference algorithms such as Sequential Monte Carlo (SMC) provide powerful tools for constraining procedural models in computer graphics, but they require many samples to produce desirable results. In this paper, we show how to create procedural models which learn how to satisfy constraints. We augment procedural models with neural networks which control how the model makes random choices based on the output it has generated thus far. We call such models neurally-guided procedural models. As a pre-computation, we train these models to maximize the likelihood of example outputs generated via SMC. They are then used as efficient SMC importance samplers, generating high-quality results with very few samples. We evaluate our method on L-system-like models with image-based constraints. Given a desired quality threshold, neurally-guided models can generate satisfactory results up to 10x faster than unguided models.
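
The core idea is that each random choice is drawn from a learned proposal conditioned on the output generated so far, while the importance weight records the mismatch between the proposal and the model's prior over that choice. The sketch below illustrates this for a single Bernoulli choice; the "guide network" here is a hypothetical stand-in function, not the paper's architecture, and in the real system it would be trained on SMC-generated example outputs.

```python
# Illustrative sketch of one neurally-guided random choice and the importance
# weight an SMC sampler would accumulate. The guide below is a placeholder.
import math
import random

def proposal_params(partial_output):
    """Hypothetical guide: maps the output generated so far to the parameter
    of the next Bernoulli choice (e.g., 'branch or not')."""
    # Stand-in heuristic: branch less as the generated structure grows.
    return 1.0 / (1.0 + len(partial_output))

def guided_choice(partial_output, prior_p=0.5):
    """Sample one choice from the guide; return it with the log importance
    weight log p(choice) - log q(choice)."""
    q = proposal_params(partial_output)
    choice = random.random() < q
    log_q = math.log(q if choice else 1.0 - q)
    log_p = math.log(prior_p if choice else 1.0 - prior_p)
    return choice, log_p - log_q

# One guided rollout: a sequence of choices plus the accumulated log-weight
# that SMC would use when resampling particles.
partial, log_weight = [], 0.0
for _ in range(5):
    c, lw = guided_choice(partial)
    partial.append(c)
    log_weight += lw
print(partial, log_weight)
```
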
