Probabilistic programming languages allow a modeler to build probabilistic models over complex data structures with all the power of a programming language. We present CTPPL, an expressive probabilistic programming language for dynamic processes that models processes in continuous time. Time is a first-class element of our language; the amount of time taken by a subprocess can itself be specified using the full power of the language. We show through examples that CTPPL can easily represent existing continuous-time frameworks and makes it easy to define new ones. We give semantics for CTPPL in terms of a probability measure over trajectories, and we present a particle filtering algorithm that works for a large and useful class of CTPPL programs.
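To make the inference idea concrete, here is a minimal bootstrap particle filter for a continuous-time process, in the spirit of the trajectory-based view above. This is an illustrative sketch, not the paper's algorithm: the "program" is fixed to a Poisson process whose unknown rate is inferred from event counts observed over time windows, and all names below are hypothetical.

```python
import math
import random

def particle_filter(observations, window, n_particles=2000, seed=0):
    """Bootstrap particle filter for the rate of a Poisson process.

    Each observation is an event count seen during one time window of
    length `window`. Particles carry a candidate rate; weights come from
    the Poisson likelihood of the observed count under that rate.
    """
    rng = random.Random(seed)
    # Prior over rates: uniform on [0.1, 10].
    particles = [rng.uniform(0.1, 10.0) for _ in range(n_particles)]
    for count in observations:
        # Weight each particle by the Poisson likelihood of the count.
        weights = []
        for rate in particles:
            lam = rate * window
            weights.append(math.exp(-lam) * lam ** count / math.factorial(count))
        total = sum(weights)
        probs = [w / total for w in weights]
        # Multinomial resampling keeps the particle set focused on
        # high-likelihood rates.
        particles = rng.choices(particles, weights=probs, k=n_particles)
    return sum(particles) / n_particles  # posterior mean of the rate
```

Observing counts averaging about 3 per unit window should pull the estimate toward a rate near 3.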
We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our features learned without supervision achieve impressive performance on 3D object recognition, comparable with supervised learning methods. Published at the Neural Information Processing Systems Conference.
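The second benefit, a mapping from a low-dimensional latent space to the space of 3D objects, can be sketched as follows. The real model uses volumetric (3D transposed) convolutions trained adversarially; here a fixed random linear map followed by a threshold stands in for the trained network, purely to make the latent-vector-to-voxel-grid mapping concrete. All names are hypothetical.

```python
import random

def make_generator(latent_dim=8, grid=8, seed=0):
    """Build a toy generator: latent vector z -> binary voxel occupancy grid."""
    rng = random.Random(seed)
    # One weight vector per voxel (a random "untrained" linear layer).
    weights = [[rng.gauss(0, 1) for _ in range(latent_dim)]
               for _ in range(grid ** 3)]

    def generate(z):
        assert len(z) == latent_dim
        # Threshold the linear response to get an occupancy bit per voxel.
        flat = [1 if sum(w_i * z_i for w_i, z_i in zip(w, z)) > 0 else 0
                for w in weights]
        # Reshape the flat list into a grid x grid x grid volume.
        return [[[flat[(x * grid + y) * grid + zc] for zc in range(grid)]
                 for y in range(grid)]
                for x in range(grid)]

    return generate
```

Sampling z from a Gaussian and calling the generator yields a voxel volume with no reference image or CAD model involved, which is the point of the latent-space formulation.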
My Ph.D. dissertation describes PAGODA (probabilistic autonomous goal-directed agent), a model for an intelligent agent that learns autonomously in domains containing uncertainty. The ultimate goal of this line of research is to develop intelligent problem-solving and planning systems that operate in complex domains, function largely autonomously, use whatever knowledge is available to them, and learn from their experience. PAGODA was motivated by two specific requirements: the agent should be capable of operating with minimal intervention from humans, and it should be able to cope with uncertainty (which can result from inaccurate sensors, a nondeterministic environment, complexity, or sensory limitations). I argue that the principles of probability theory and decision theory can be used to build rational agents that satisfy these requirements.
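The rational-agent principle invoked here has a standard minimal form: choose the action that maximizes expected utility under the agent's (possibly uncertain) beliefs. The sketch below illustrates that principle only; the action names, outcome probabilities, and utilities are hypothetical and not taken from the dissertation.

```python
def choose_action(actions):
    """actions maps an action name to a list of (probability, utility)
    outcomes; return the action with maximum expected utility."""
    def expected_utility(outcomes):
        return sum(p * u for p, u in outcomes)
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical beliefs of an agent facing two actions.
beliefs = {
    "explore": [(0.6, 5.0), (0.4, -2.0)],  # expected utility 2.2
    "exploit": [(0.9, 2.0), (0.1, 0.0)],   # expected utility 1.8
}
```

Under these beliefs the agent rationally prefers "explore", even though "exploit" succeeds more often, because the expected payoff is higher.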
In this paper, we address the problem of merging multiple imprecise probabilistic beliefs represented as Probabilistic Logic Programs (PLPs) obtained from multiple sources. Beliefs in each PLP are modeled as conditional events annotated with probability bounds. The major task of syntax-based merging is to obtain the most rational probability bound for each conditional event from the original PLPs to form a new PLP. We require that the minimal-change principle be followed, so that each source gives up as little of its beliefs as possible. Some instantiated merging operators are derived from our merging framework. Furthermore, we propose a set of postulates for merging PLPs, some of which extend the postulates for merging classical knowledge bases, whilst others are specific to the merging of probabilistic beliefs.
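One simple instantiated operator in this spirit can be sketched as follows, assuming each source reports an interval [l_i, u_i] for the same conditional event: take the intersection of the intervals when it is non-empty (no source gives anything up), and otherwise fall back to the convex hull (every source concedes as little as needed). This is an illustrative operator consistent with the minimal-change idea, not the paper's full framework.

```python
def merge_bounds(bounds):
    """Merge probability intervals for one conditional event.

    bounds: list of (lower, upper) probability intervals in [0, 1].
    Returns the intersection when the sources are consistent, otherwise
    the convex hull as a minimally widened compromise.
    """
    lo = max(l for l, _ in bounds)
    hi = min(u for _, u in bounds)
    if lo <= hi:
        return (lo, hi)                    # non-empty intersection
    return (min(l for l, _ in bounds),     # conflict: widen to the hull
            max(u for _, u in bounds))
```

For consistent sources such as [0.2, 0.6] and [0.4, 0.8] this yields the tighter bound [0.4, 0.6]; for conflicting sources it returns the smallest interval covering all of them.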
We present Probapop, a partial-order, conformant, probabilistic planner that competed in the blind track of the Probabilistic Planning Competition at IPC-4. We explain how we adapt distance-based heuristics for use with probabilistic domains. Probapop also incorporates heuristics based on the probability of plan success. We explain the successes and difficulties encountered during the design and implementation of Probapop.
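The two heuristic ideas can be sketched together: a distance-based estimate that counts relaxed steps to the goal while ignoring probabilities, and a probability-of-success score that optimistically multiplies the best outcome probability of each planned action. The set-based domain encoding below is hypothetical; Probapop's actual heuristics operate on partial-order plans.

```python
def relaxed_distance(state, goal, actions):
    """Delete-relaxation style distance: count layers until all goal facts
    are reachable, treating every effect as a monotone addition.

    actions: list of (preconditions, [(probability, effects), ...]),
    with preconditions, effects, state, and goal as frozensets/sets.
    """
    reached, steps = set(state), 0
    while not goal <= reached:
        new = set()
        for pre, outcomes in actions:
            if pre <= reached:
                for _, effects in outcomes:
                    new |= effects
        if new <= reached:
            return float("inf")   # goal unreachable even in the relaxation
        reached |= new
        steps += 1
    return steps

def success_probability(plan, actions):
    """Optimistically score a plan by the best outcome of each action."""
    prob = 1.0
    for idx in plan:
        _, outcomes = actions[idx]
        prob *= max(p for p, _ in outcomes)
    return prob
```

In a blind (conformant) setting these two signals pull in different directions: the distance heuristic favors short plans, while the success probability penalizes plans built from unreliable actions.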