In this paper, we consider a hybrid solution to the sensor network position inference problem, which combines a real-time filtering system with information from a more expensive, global inference procedure to improve accuracy and prevent divergence. Many online solutions to this problem rely on simplifying assumptions, such as Gaussian noise models and linear system behaviour, and also adopt a filtering strategy that may not use the available information optimally. These assumptions allow near real-time inference, but they also limit accuracy and introduce the potential for ill-conditioning and divergence. We consider augmenting a particular real-time estimation method, the extended Kalman filter (EKF), with a more complex but more accurate inference technique based on Markov chain Monte Carlo (MCMC) methodology. Conventional MCMC techniques applied to this problem can entail significant, time-consuming computation to achieve convergence. To address this, we propose an intelligent bootstrapping process and the use of parallel, communicating chains at different temperatures, commonly referred to as parallel tempering. The combined approach is shown to provide substantial improvement in a realistic simulated mapping environment and when applied to a complex physical system involving a robotic platform moving in an office environment instrumented with a camera sensor network.
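The parallel-tempering idea can be sketched as follows. This is a minimal illustration, not the paper's actual model: the bimodal target density, the proposal scale, and the temperature ladder are all hypothetical choices made here so the example is self-contained.

```python
import math
import random

random.seed(0)

def log_target(x):
    # Hypothetical bimodal density standing in for the true posterior.
    return math.log(math.exp(-(x - 3.0) ** 2) + math.exp(-(x + 3.0) ** 2))

def parallel_tempering(n_iters=5000, temps=(1.0, 2.0, 4.0, 8.0)):
    """Run one Metropolis chain per temperature, occasionally proposing
    swaps between adjacent chains; return samples from the cold chain."""
    states = [0.0] * len(temps)
    cold_samples = []
    for _ in range(n_iters):
        # Within-chain Metropolis update, tempered by 1/T: hot chains
        # see a flattened target and cross between modes easily.
        for k, T in enumerate(temps):
            prop = states[k] + random.gauss(0.0, 1.0)
            if math.log(random.random()) < (log_target(prop) - log_target(states[k])) / T:
                states[k] = prop
        # Swap proposal between a random adjacent pair of temperatures.
        k = random.randrange(len(temps) - 1)
        log_accept = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (
            log_target(states[k + 1]) - log_target(states[k]))
        if math.log(random.random()) < log_accept:
            states[k], states[k + 1] = states[k + 1], states[k]
        cold_samples.append(states[0])
    return cold_samples

samples = parallel_tempering()
```

The swaps are what distinguish this from running independent chains: mode jumps discovered by the hot chains propagate down the temperature ladder, so the cold chain visits both modes instead of getting stuck in one.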
In this paper, we propose a latent multi-task learning algorithm to solve the multi-device indoor localization problem. Traditional indoor localization systems often assume that the collected signal data distributions are fixed, so that a localization model learned on one device can be used on other devices without adaptation. However, by empirically studying the signal variation across different devices, we found this assumption to be invalid in practice. To solve this problem, we treat multiple devices as multiple learning tasks and propose a multi-task learning algorithm. Unlike algorithms that assume the hypotheses learned in the original data space for related tasks are similar, we only require the hypotheses learned in a latent feature space to be similar. To realize our algorithm, we employ an alternating optimization approach that iteratively learns feature mappings and multi-task regression models for the devices. We apply our latent multi-task learning algorithm to real-world indoor localization data and demonstrate its effectiveness.
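A minimal sketch of such an alternating-optimization scheme, assuming a linear latent feature map U shared across devices and a per-device weight vector w_t; the synthetic data, dimensions, and ridge penalty below are illustrative inventions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: three "devices" (tasks) sharing one latent map.
d, k, n = 6, 2, 80                     # raw features, latent dims, samples/task
U_true = rng.normal(size=(d, k))
tasks = []
for _ in range(3):
    X = rng.normal(size=(n, d))
    w = rng.normal(size=k)
    tasks.append((X, X @ U_true @ w + 0.01 * rng.normal(size=n)))

def fit_latent_multitask(tasks, k, n_iters=30, lam=1e-3):
    """Alternate between a shared feature map U and per-task weights w_t."""
    d = tasks[0][0].shape[1]
    U = rng.normal(size=(d, k))
    for _ in range(n_iters):
        # Step 1: fix U and solve a ridge regression per task for w_t.
        ws = []
        for X, y in tasks:
            Z = X @ U
            ws.append(np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ y))
        # Step 2: fix all w_t; solving for U is then a linear least-squares
        # problem in vec(U), assembled here with Kronecker products.
        A = np.vstack([np.kron(X, w[None, :]) for (X, _), w in zip(tasks, ws)])
        b = np.concatenate([y for _, y in tasks])
        U = np.linalg.lstsq(A, b, rcond=None)[0].reshape(d, k)
    return U, ws

U, ws = fit_latent_multitask(tasks, k)
mse = sum(np.mean((X @ U @ w - y) ** 2) for (X, y), w in zip(tasks, ws))
```

Each step solves its subproblem exactly, so the joint objective is non-increasing across iterations; the tasks are coupled only through the shared map U, which is what lets data from one device inform the others.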
A key element of transfer learning is to identify structured knowledge that enables the transfer. Structured knowledge comes in different forms, depending on the nature of the learning problem and the characteristics of the domains. In this article, we describe three of our recent works on transfer learning, ordered by the increasing sophistication of the structured knowledge being transferred. We show that optimization methods, together with techniques inspired by the concerns of data reuse, can be applied to extract and transfer deep structural knowledge between a variety of source and target problems. The need for such transfer typically arises when we face new domains and encounter new tasks.
Monte Carlo Localization (MCL) is a sampling-based algorithm for mobile robot localization. In this paper we describe an MCL assignment and the hardware and software it requires. A Neato vacuum robot and a Raspberry Pi serve as the core of the robot platform, and the Robot Operating System (ROS) is used as the robot programming environment. Students are expected to learn the localization problem, implement the MCL algorithm, and come to better understand the kidnapped robot problem and the limitations of MCL by observing the algorithm's performance in a real-time application.
This paper presents a new algorithm for mobile robot localization, called Monte Carlo Localization (MCL). MCL is a version of Markov localization, a family of probabilistic approaches that have recently been applied with great practical success. However, previous approaches were either computationally cumbersome (such as grid-based approaches that represent the state space by high-resolution 3D grids), or had to resort to extremely coarse-grained resolutions. Our approach is computationally efficient while retaining the ability to represent (almost) arbitrary distributions. MCL applies sampling-based methods for approximating probability distributions, in a way that places computation "where needed." The number of samples is adapted online, thereby invoking large sample sets only when necessary. Empirical results illustrate that MCL yields improved accuracy while requiring an order of magnitude less computation when compared to previous approaches. It is also much easier to implement.
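The sampling-based update can be illustrated with a minimal 1-D sketch. The corridor world, the range-to-beacon sensor, and the spread-based rule for shrinking the particle set are simplifications invented here for illustration; they stand in for, but do not reproduce, the paper's adaptive sampling scheme.

```python
import math
import random
import statistics

random.seed(1)

BEACON = 30.0  # known landmark position in a hypothetical 1-D corridor

def measure(x):
    # Simulated noisy range reading from position x to the beacon.
    return abs(x - BEACON) + random.gauss(0.0, 1.0)

def likelihood(p, z):
    # Gaussian sensor model: how well particle p explains reading z.
    return math.exp(-0.5 * (z - abs(p - BEACON)) ** 2)

def mcl_step(particles, control, z, n_min=50, n_max=1000):
    # Motion update: shift each particle by the control, plus process noise.
    particles = [p + control + random.gauss(0.0, 0.5) for p in particles]
    # Measurement update: importance weights from the sensor model.
    weights = [likelihood(p, z) for p in particles]
    total = sum(weights) or 1e-300
    weights = [w / total for w in weights]
    # Adapt the sample size online: use fewer particles once the
    # distribution has concentrated, more while it is still spread out.
    n = max(n_min, min(n_max, int(20 * statistics.pstdev(particles))))
    # Resample n particles in proportion to their weights.
    return random.choices(particles, weights=weights, k=n)

true_x = 20.0
particles = [random.uniform(0.0, 100.0) for _ in range(1000)]
for _ in range(30):
    true_x += 1.0
    particles = mcl_step(particles, 1.0, measure(true_x))
estimate = statistics.median(particles)
```

The filter starts with a large, globally spread sample set and, as the weighted resampling concentrates the particles around the true position, the adaptive rule shrinks the set toward its minimum size, placing computation "where needed."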