Bayesian Calibration for Monte Carlo Localization

AAAI Conferences

Localization is a fundamental challenge for autonomous robotics. Although accurate and efficient techniques now exist for solving this problem, they require explicit probabilistic models of the robot's motion and sensors. These models are usually obtained from time-consuming and error-prone measurement or tedious manual tuning. In this paper we examine automatic calibration of sensor and motion models from a Bayesian perspective. We introduce an efficient MCMC procedure for sampling from the posterior distribution of the model parameters. We also present a novel extension of particle filters to make use of our posterior parameter samples. Finally, we demonstrate our approach both in simulation and on a physical robot. Our results demonstrate effective inference of model parameters, along with the paradoxical finding that using posterior parameter samples can produce more accurate position estimates than using the true parameters.
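
The calibration idea can be pictured with a minimal random-walk Metropolis sketch; this is not the paper's actual procedure. It samples the posterior over a single sensor-noise parameter sigma from simulated range readings to a wall at a known distance. All quantities here (the 5 m wall, the log-normal prior, the step size) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: ranges to a wall at a known 5 m distance,
# corrupted by Gaussian noise whose scale sigma we want to calibrate.
true_sigma = 0.3
observations = rng.normal(5.0, true_sigma, size=50)

def log_posterior(sigma):
    if sigma <= 0:
        return -np.inf
    # Gaussian likelihood plus a weak log-normal(0, 1) prior on sigma.
    loglik = (-0.5 * np.sum(((observations - 5.0) / sigma) ** 2)
              - len(observations) * np.log(sigma))
    logprior = -np.log(sigma) - 0.5 * np.log(sigma) ** 2
    return loglik + logprior

sigma, samples = 1.0, []
for _ in range(5000):
    proposal = sigma + 0.05 * rng.standard_normal()  # symmetric random walk
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(sigma):
        sigma = proposal
    samples.append(sigma)

posterior_sigmas = np.array(samples[1000:])  # discard burn-in
print(posterior_sigmas.mean(), posterior_sigmas.std())  # mean near 0.3
```

Each particle in the localization filter could then carry its own sigma drawn from posterior_sigmas, a rough analogue of the paper's extension of particle filters to posterior parameter samples.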


The Informed Sampler: A Discriminative Approach to Bayesian Inference in Generative Computer Vision Models

arXiv.org Machine Learning

Computer vision is hard because of a large variability in lighting, shape, and texture; in addition, the image signal is non-additive due to occlusion. Generative models promised to account for this variability by accurately modelling the image formation process as a function of latent variables with prior beliefs. Bayesian posterior inference could then, in principle, explain the observation. While intuitively appealing, generative models for computer vision have largely failed to deliver on that promise due to the difficulty of posterior inference. As a result the community has favoured efficient discriminative approaches. We still believe in the usefulness of generative models in computer vision, but argue that we need to leverage existing discriminative or even heuristic computer vision methods. We implement this idea in a principled way with an "informed sampler" and in careful experiments demonstrate it on challenging generative models which contain renderer programs as their components. We concentrate on the problem of inverting an existing graphics rendering engine, an approach that can be understood as "Inverse Graphics". The informed sampler, using simple discriminative proposals based on existing computer vision technology, achieves significant improvements in inference.
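
A sketch of the core mixture idea, under toy assumptions: a hypothetical scalar "renderer" stands in for the graphics engine, and the discriminative component is stubbed as a fixed Gaussian guess (as a trained regressor might supply). The sampler mixes a global independence proposal drawn from that guess with a local random-walk kernel, which is the informed sampler's basic structure; nothing here reproduces the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for inverse graphics: infer a scene parameter theta
# from a noisy "rendering". Both the renderer and the discriminative
# guess (mean 1.8, sd 0.5, as a regressor might output) are invented.
render = lambda theta: theta * np.sin(theta)
y_obs = render(2.0) + 0.1 * rng.standard_normal()

def log_post(theta):                 # flat prior, Gaussian pixel noise
    return -0.5 * ((y_obs - render(theta)) / 0.1) ** 2

guess_mu, guess_sd = 1.8, 0.5        # hypothetical discriminative guess
def log_q(theta):                    # informed proposal density (unnormalized)
    return -0.5 * ((theta - guess_mu) / guess_sd) ** 2

theta, chain = 0.0, []
for _ in range(20000):
    if rng.uniform() < 0.5:          # global, observation-driven proposal
        prop = guess_mu + guess_sd * rng.standard_normal()
        log_alpha = (log_post(prop) + log_q(theta)) - (log_post(theta) + log_q(prop))
    else:                            # local random-walk proposal
        prop = theta + 0.1 * rng.standard_normal()
        log_alpha = log_post(prop) - log_post(theta)
    if np.log(rng.uniform()) < log_alpha:
        theta = prop
    chain.append(theta)

print(np.mean(chain[5000:]))         # settles in the basin near theta = 2
```

The independence proposal needs the q-density ratio in the acceptance test, while the symmetric random walk does not; mixing the two kernels keeps the chain valid while letting the discriminative guess drive the big jumps.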


Monte Carlo Localization With Mixture Proposal Distribution

AAAI Conferences

Monte Carlo localization (MCL) is a Bayesian algorithm for mobile robot localization based on particle filters, which has enjoyed great practical success. This paper points out a counterintuitive limitation of MCL: better sensors can yield worse results. An analysis of this problem leads to the formulation of a new proposal distribution for the Monte Carlo sampling step. Extensive experimental results with physical robots suggest that the new algorithm is significantly more robust and accurate than plain MCL. These results extend beyond mobile robot localization and apply to a range of particle filter applications.
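
The mixture proposal can be sketched in one dimension, loosely following the dual-sampling idea: most particles are drawn from the motion model and weighted by the sensor likelihood, while a small fraction phi are drawn from the sensor model and weighted by how well the motion prediction explains them. All models and constants below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1-D corridor toy: state x is position, u is the commanded step, and
# the sensor reads position directly (hypothetical setup).
N, phi = 1000, 0.1              # particle count, sensor-proposal fraction
motion_sd, sensor_sd = 0.2, 0.05

def mcl_mixture_step(particles, u, z):
    predicted = particles + u + motion_sd * rng.standard_normal(N)
    k = int(phi * N)
    # (1 - phi) of the particles: motion proposal, sensor-likelihood weight.
    x_motion = predicted[k:]
    w_motion = np.exp(-0.5 * ((z - x_motion) / sensor_sd) ** 2)
    # phi of the particles: sampled from the sensor model, weighted by
    # how well the motion prediction explains them (kernel density).
    x_sensor = z + sensor_sd * rng.standard_normal(k)
    w_sensor = np.array([
        np.mean(np.exp(-0.5 * ((xs - predicted) / motion_sd) ** 2))
        for xs in x_sensor
    ])
    x = np.concatenate([x_sensor, x_motion])
    w = np.concatenate([w_sensor, w_motion])
    return x[rng.choice(N, size=N, p=w / w.sum())]   # resample

particles = rng.uniform(0.0, 10.0, N)     # start globally uncertain
particles = mcl_mixture_step(particles, u=0.5, z=3.2)
print(particles.mean(), particles.std())  # concentrates near z = 3.2
```

The sensor-driven fraction is what rescues the filter when the sensor is very accurate: with a tiny sensor_sd, almost no motion-proposed particle lands inside the narrow likelihood peak, which is exactly the failure mode the abstract describes.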


Learning models of object structure

Neural Information Processing Systems

We present an approach for learning stochastic geometric models of object categories from single-view images. We focus here on models expressible as a spatially contiguous assemblage of blocks. Model topologies are learned across groups of images, and one or more such topologies are linked to an object category (e.g. chairs). Fitting learned topologies to an image can be used to identify the object class, as well as detail its geometry. The latter goes beyond labeling objects, as it provides the geometric structure of particular instances. We learn the models using joint statistical inference over structure parameters, camera parameters, and instance parameters. These produce an image likelihood through a statistical imaging model. We use trans-dimensional sampling to explore topology hypotheses, and alternate between Metropolis-Hastings and stochastic dynamics to explore instance parameters. Experiments on images of furniture objects such as tables and chairs suggest that this is an effective approach for learning models that encode simple representations of category geometry and the statistics thereof, and support inferring both category and geometry on held-out single-view images.
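
A stripped-down sketch of the trans-dimensional search, with two deliberate simplifications: "topology" is reduced to the number of blocks k, and plain Metropolis-Hastings is substituted for the paper's stochastic dynamics on instance parameters. Because birth/death moves propose new block heights from the prior (and a uniform prior over k is assumed), the reversible-jump acceptance ratio collapses to a likelihood ratio. Everything here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# "Topology" is the number of blocks k (1..5), each with a height
# h_i ~ N(0, 1); a single datum y depends on the summed height
# through a Gaussian likelihood.
y = 1.7

def loglik(h):
    return -0.5 * ((y - h.sum()) / 0.1) ** 2

def logprior(h):
    return -0.5 * np.sum(h ** 2)

h = np.array([0.0])
counts = np.zeros(6)
for _ in range(20000):
    if rng.uniform() < 0.5:
        # Within-model Metropolis-Hastings on one block's height
        # (substituted for the paper's stochastic dynamics).
        i = rng.integers(len(h))
        prop = h.copy()
        prop[i] += 0.2 * rng.standard_normal()
        if np.log(rng.uniform()) < (loglik(prop) + logprior(prop)
                                    - loglik(h) - logprior(h)):
            h = prop
    else:
        # Trans-dimensional birth/death move; at the k = 1 and k = 5
        # boundaries the proposal is a no-op and the state is kept.
        if rng.uniform() < 0.5:
            prop = (np.insert(h, rng.integers(len(h) + 1),
                              rng.standard_normal())
                    if len(h) < 5 else h)
        else:
            prop = np.delete(h, rng.integers(len(h))) if len(h) > 1 else h
        # Prior-as-proposal births reduce the acceptance ratio to a
        # likelihood ratio under the uniform prior over k.
        if np.log(rng.uniform()) < loglik(prop) - loglik(h):
            h = prop
    counts[len(h)] += 1

print(counts / counts.sum())  # empirical posterior over block count k
```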


Variational MCMC

arXiv.org Machine Learning

We propose a new class of learning algorithms that combines variational approximation and Markov chain Monte Carlo (MCMC) simulation. Naive algorithms that use the variational approximation as proposal distribution can perform poorly because this approximation tends to underestimate the true variance and other features of the data. We solve this problem by introducing more sophisticated MCMC algorithms. One of these algorithms is a mixture of two MCMC kernels: a random walk Metropolis kernel and a block Metropolis-Hastings (MH) kernel with a variational approximation as proposal distribution. The MH kernel allows us to locate regions of high probability efficiently. The Metropolis kernel allows us to explore the vicinity of these regions. This algorithm outperforms variational approximations because it yields slightly better estimates of the mean and considerably better estimates of higher moments, such as covariances. It also outperforms standard MCMC algorithms because it locates the regions of high probability quickly, thus speeding up convergence. We demonstrate this algorithm on the problem of Bayesian parameter estimation for logistic (sigmoid) belief networks.
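
A sketch of the mixture kernel on a toy target, assuming a stand-in "variational fit": a Gaussian with roughly the right mean but, by construction, too small a variance, mirroring the underestimation the abstract describes. The independence MH kernel proposes from that fit; the random-walk kernel explores what the fit misses.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_p(x):                    # toy target: N(1, 2^2), up to a constant
    return -0.5 * ((x - 1.0) / 2.0) ** 2

q_mu, q_sd = 0.8, 1.0            # hypothetical variational approximation
def log_q(x):                    # its log density, up to a constant
    return -0.5 * ((x - q_mu) / q_sd) ** 2

x, chain = 0.0, []
for _ in range(50000):
    if rng.uniform() < 0.3:      # block independence MH kernel
        prop = q_mu + q_sd * rng.standard_normal()
        log_alpha = (log_p(prop) + log_q(x)) - (log_p(x) + log_q(prop))
    else:                        # random-walk Metropolis kernel
        prop = x + 1.0 * rng.standard_normal()
        log_alpha = log_p(prop) - log_p(x)
    if np.log(rng.uniform()) < log_alpha:
        x = prop
    chain.append(x)

chain = np.asarray(chain[5000:])
# The chain recovers the true second moment (var near 4) that the
# variational fit alone (var = 1) underestimates.
print(chain.mean(), chain.var())
```

Because both kernels leave the target invariant, so does their mixture; the variational fit only shapes where the chain looks first, echoing the abstract's point that the combination beats either ingredient alone.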