Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling

Neural Information Processing Systems

We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and the features it learns without supervision achieve impressive performance on 3D object recognition, comparable with supervised learning methods.
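To make the pipeline concrete, here is a minimal PyTorch-style sketch of a volumetric generator in the spirit described above: a latent vector is upsampled through 3D transposed convolutions into a voxel occupancy grid. The channel counts and 200-dimensional latent size follow the commonly cited 3D-GAN configuration, but the exact hyperparameters here are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class VoxelGenerator(nn.Module):
    """Maps a latent vector to a voxel occupancy grid via 3D transposed
    convolutions. A sketch; layer sizes are assumptions."""
    def __init__(self, latent_dim: int = 200):
        super().__init__()
        self.net = nn.Sequential(
            # latent_dim x 1 x 1 x 1 -> 512 x 4 x 4 x 4
            nn.ConvTranspose3d(latent_dim, 512, kernel_size=4, stride=1),
            nn.BatchNorm3d(512), nn.ReLU(inplace=True),
            # -> 256 x 8 x 8 x 8
            nn.ConvTranspose3d(512, 256, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(256), nn.ReLU(inplace=True),
            # -> 128 x 16 x 16 x 16
            nn.ConvTranspose3d(256, 128, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(128), nn.ReLU(inplace=True),
            # -> 64 x 32 x 32 x 32
            nn.ConvTranspose3d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            # -> 1 x 64 x 64 x 64 occupancy probabilities
            nn.ConvTranspose3d(64, 1, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

# Sample objects directly from the latent space -- no reference image or
# CAD model is needed, which is the second benefit the abstract highlights.
g = VoxelGenerator()
voxels = g(torch.randn(2, 200))  # shape: (2, 1, 64, 64, 64)
```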




Composing Modeling and Inference Operations with Probabilistic Program Combinators

arXiv.org Machine Learning

Probabilistic programs with dynamic computation graphs can define measures over sample spaces with unbounded dimensionality, and thereby constitute programmatic analogues to Bayesian nonparametrics. Owing to the generality of this model class, inference relies on "black-box" Monte Carlo methods that are generally not able to take advantage of conditional independence and exchangeability, which have historically been the cornerstones of efficient inference. Here we seek to develop a "middle ground" between probabilistic models with fully dynamic and fully static computation graphs. To this end, we introduce a combinator library for the Probabilistic Torch framework. Combinators are functions that accept models and return transformed models. We assume that models are dynamic, but that model composition is static, in the sense that combinator application takes place prior to evaluating the model on data. Combinators provide primitives for both model and inference composition. Model combinators take the form of classic functional programming constructs such as map and reduce. These constructs define a computation graph at a coarsened level of representation, in which nodes correspond to models, rather than individual variables. Inference combinators - such as enumeration, importance resampling, and Markov Chain Monte Carlo operators - assume a sampling semantics for model evaluation, in which application of combinators preserves proper weighting. Owing to this property, models defined using combinators can be trained using stochastic methods that optimize either variational or wake-sleep style objectives. As a validation of this principle, we use combinators to implement black-box inference for hidden Markov models.
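As a rough illustration of the combinator idea, the sketch below defines an importance-resampling combinator in plain Python: a higher-order function that accepts a model (here, any callable returning a value and a log-weight) and returns a transformed model whose samples remain properly weighted. All names are hypothetical; this is not the Probabilistic Torch API.

```python
import math
import random

def resample(model, num_particles=10):
    """Importance-resampling combinator (hypothetical names): run `model`
    as several weighted particles, resample one in proportion to the
    weights, and attach the average weight so the result is again a
    properly weighted sample."""
    def transformed(*args):
        particles = [model(*args) for _ in range(num_particles)]
        log_ws = [lw for _, lw in particles]
        m = max(log_ws)
        ws = [math.exp(lw - m) for lw in log_ws]
        value, _ = random.choices(particles, weights=ws, k=1)[0]
        # log of the mean importance weight, computed stably
        avg_log_w = m + math.log(sum(ws) / num_particles)
        return value, avg_log_w
    return transformed

# Toy model: propose a latent x from its prior and weight by an
# (unnormalized) Gaussian likelihood of the observation.
def gaussian_model(obs):
    x = random.gauss(0.0, 1.0)
    log_w = -0.5 * (obs - x) ** 2
    return x, log_w

posterior = resample(gaussian_model, num_particles=100)
sample, log_w = posterior(1.5)
```

Because the transformed model has the same (value, log-weight) interface as the original, combinators like this one compose: the output of `resample` can itself be passed to further combinators, which is what makes static model composition over dynamic models possible.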


Probabilistic Modeling for Crowdsourcing Partially-Subjective Ratings

AAAI Conferences

While many methods have been proposed to ensure data quality for objective tasks (in which a single correct response is presumed to exist for each item), estimating data quality with subjective tasks remains largely unexplored. Consider the popular task of collecting instance ratings from human judges: while agreement tends to be high for instances having extremely good or bad properties, instances with more middling properties naturally elicit a wider variance in opinion. In addition, because such subjectivity permits a valid diversity of responses, it can be difficult to detect if a judge does not undertake the task in good faith. To address this, we propose a probabilistic, heteroskedastic model in which the means and variances of worker responses are modeled as functions of instance attributes. We derive efficient Expectation Maximization (EM) learning and variational inference algorithms for parameter estimation. We apply our model to a large dataset of 24,132 Mechanical Turk ratings of user experience in viewing videos on smartphones with varying hardware capabilities. Results show that our method is effective both at predicting user ratings and at detecting unreliable respondents.
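A simplified sketch of the heteroskedastic idea: both the mean and the log-variance of a rating are modeled as functions of the instance attributes. For brevity this version uses linear functions and direct maximum-likelihood fitting with L-BFGS instead of the paper's EM and variational inference algorithms; the variable names and toy data are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def fit_heteroskedastic(X, y):
    """Fit y ~ N(X @ w_mu, exp(X @ w_s)): mean and log-variance are both
    linear in the instance attributes X. A sketch, not the paper's
    estimator."""
    n, d = X.shape

    def nll(params):
        w_mu, w_s = params[:d], params[d:]
        mu, log_var = X @ w_mu, X @ w_s
        # Gaussian negative log-likelihood up to an additive constant
        return 0.5 * np.sum(log_var + (y - mu) ** 2 / np.exp(log_var))

    res = minimize(nll, np.zeros(2 * d), method="L-BFGS-B")
    return res.x[:d], res.x[d:]

# Toy data: ratings drift upward with an attribute, and their spread grows
# too, mimicking the wider variance of opinion on middling instances.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(500), rng.uniform(0, 1, 500)])
std = np.exp(0.5 * (X @ np.array([-1.0, 2.0])))
y = X @ np.array([3.0, 1.0]) + rng.normal(0.0, std, 500)
w_mu, w_s = fit_heteroskedastic(X, y)
```

Once fitted, an instance whose predicted variance is large is one where disagreement is expected, so a judge who deviates there is less suspect than one who deviates on a low-variance instance.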


Modeling Procedural State Changes over Time with Probabilistic Soft Logic

AAAI Conferences

Robust natural language understanding involves the automatic extraction and representation of entities, events, and states from unstructured text. However, a significant portion of the knowledge required for human-level understanding is implicit in the text and can only be accessed through inference. In this work, we employ Probabilistic Soft Logic (PSL) as a framework for leveraging common-sense knowledge to support natural language understanding over procedural texts. Under this framework, we combine logical consistency constraints with succinct representations of commonsense knowledge to probabilistically model entity-centric stative information over time. We demonstrate the feasibility of using PSL to represent procedural stative knowledge through a scalability assessment over an in-house, multi-domain, synthetic dataset.
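PSL relaxes Boolean logic to soft truth values in [0, 1] using Łukasiewicz operators, and scores each grounded rule by its distance to satisfaction. The sketch below shows that computation for one hypothetical persistence rule over procedural state; the predicates and truth values are invented for illustration and are not from the paper's dataset.

```python
def luk_and(a, b):
    """Łukasiewicz t-norm for conjunction."""
    return max(0.0, a + b - 1.0)

def luk_not(a):
    return 1.0 - a

def rule_distance(body, head):
    """Distance to satisfaction of body -> head: max(0, body - head).
    PSL inference minimizes a weighted sum of these hinge penalties
    over all grounded rules."""
    return max(0.0, body - head)

# Hypothetical grounding of a common-sense persistence constraint:
# Destroyed(e, t) & ~Recreated(e, t+1) -> Destroyed(e, t+1)
destroyed_t = 0.9
recreated_t1 = 0.1
destroyed_t1 = 0.4  # soft truth value to be inferred

body = luk_and(destroyed_t, luk_not(recreated_t1))   # 0.8
penalty = rule_distance(body, destroyed_t1)          # 0.4
```

Because each penalty is a convex hinge in the unknown truth values, inferring the most probable state assignment reduces to a convex optimization problem, which is what makes PSL attractive for modeling stative information at scale.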