

Bai, Aijun






Intention-Aware Multi-Human Tracking for Human-Robot Interaction via Particle Filtering over Sets

AAAI Conferences

In order to successfully interact with multiple humans in social situations, an intelligent robot should be able to track multiple humans and understand their motion intentions. We formalize this problem as a hidden Markov model and estimate the posterior densities with a particle filtering over sets approach. Our approach avoids directly performing observation-to-target association by defining a set as a joint state. The human identification problem is then solved in an expectation-maximization manner. We evaluate the effectiveness of our approach with both benchmark tests and real robot experiments.
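The core idea above — representing all tracked humans as one joint state so that no explicit observation-to-target association is needed — can be illustrated with a minimal particle filter sketch. This is not the authors' implementation; the transition model, likelihood, and one-dimensional joint states below are hypothetical placeholders supplied by the caller.

```python
import math
import random

def particle_filter_step(particles, weights, transition, likelihood, observation, rng=random):
    """One predict-update-resample cycle of a particle filter.

    Each particle is a joint state (e.g. a tuple of per-human states), so
    data association is implicit in the joint representation rather than
    computed per observation.
    """
    # Predict: propagate each joint state through the motion model.
    predicted = [transition(p, rng) for p in particles]
    # Update: reweight by the likelihood of the observation given each joint state.
    new_weights = [w * likelihood(observation, p) for w, p in zip(weights, predicted)]
    total = sum(new_weights) or 1e-300
    new_weights = [w / total for w in new_weights]
    # Resample (multinomial) to combat weight degeneracy.
    resampled = rng.choices(predicted, weights=new_weights, k=len(predicted))
    return resampled, [1.0 / len(resampled)] * len(resampled)
```

In the paper's setting the joint state would hold one pose-and-intention component per tracked human; the sketch keeps the state abstract to show only the filtering recursion.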




Thompson Sampling Based Monte-Carlo Planning in POMDPs

AAAI Conferences

Monte-Carlo tree search (MCTS) has been drawing great interest in recent years for planning under uncertainty. One of the key challenges is the trade-off between exploration and exploitation. To address this, we introduce a novel online planning algorithm for large POMDPs using Thompson-sampling-based MCTS that balances between cumulative and simple regrets. The proposed algorithm, Dirichlet-Dirichlet-NormalGamma based Partially Observable Monte-Carlo Planning (D2NG-POMCP), treats the accumulated reward of performing an action from a belief state in the MCTS search tree as a random variable following an unknown distribution with hidden parameters. A Bayesian method is used to model and infer the posterior distribution of these parameters by choosing the conjugate prior in the form of a combination of two Dirichlet and one NormalGamma distributions. Thompson sampling is exploited to guide action selection in the search tree. Experimental results confirm that our algorithm outperforms state-of-the-art approaches on several common benchmark problems.


Bayesian Mixture Modelling and Inference based Thompson Sampling in Monte-Carlo Tree Search

Neural Information Processing Systems

Monte-Carlo tree search is drawing great interest in the domain of planning under uncertainty, particularly when little or no domain knowledge is available. One of the central problems is the trade-off between exploration and exploitation. In this paper we present a novel Bayesian mixture modelling and inference based Thompson sampling approach to addressing this dilemma. The proposed Dirichlet-NormalGamma MCTS (DNG-MCTS) algorithm represents the uncertainty of the accumulated reward for actions in the MCTS search tree as a mixture of Normal distributions and performs inference on it in a Bayesian setting by choosing conjugate priors in the form of combinations of Dirichlet and NormalGamma distributions. Thompson sampling is used to select the best action at each decision node. Experimental results show that our algorithm achieves state-of-the-art performance compared with the popular UCT algorithm in the context of online planning for general Markov decision processes.
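The abstract's key mechanism — maintaining a NormalGamma posterior over each action's reward and selecting actions by Thompson sampling — can be sketched in isolation, outside the tree search. This is a simplified single-node illustration under stated assumptions, not the DNG-MCTS algorithm itself (in particular, it omits the Dirichlet mixture component and models each action's reward as a single Normal):

```python
import math
import random

class NormalGammaArm:
    """NormalGamma posterior over the (mean, precision) of one action's reward."""

    def __init__(self, mu0=0.0, lam=1.0, alpha=1.0, beta=1.0):
        self.mu, self.lam, self.alpha, self.beta = mu0, lam, alpha, beta

    def update(self, x):
        # Standard NormalGamma conjugate update for a single observation x.
        mu_n = (self.lam * self.mu + x) / (self.lam + 1.0)
        self.alpha += 0.5
        self.beta += 0.5 * self.lam * (x - self.mu) ** 2 / (self.lam + 1.0)
        self.mu, self.lam = mu_n, self.lam + 1.0

    def sample_mean(self, rng=random):
        # Draw precision tau ~ Gamma(alpha, rate=beta); Python's gammavariate
        # takes a scale parameter, so pass 1/beta. Then mean ~ N(mu, 1/(lam*tau)).
        tau = rng.gammavariate(self.alpha, 1.0 / self.beta)
        return rng.gauss(self.mu, 1.0 / math.sqrt(self.lam * tau))

def thompson_select(arms, rng=random):
    # Thompson sampling: pick the action with the largest sampled posterior mean.
    return max(range(len(arms)), key=lambda i: arms[i].sample_mean(rng))
```

In DNG-MCTS this selection rule is applied at every decision node of the search tree, with the posterior updated from the accumulated reward of each simulation passing through that node.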