CAPIR: Collaborative Action Planning with Intention Recognition

AAAI Conferences

We apply decision theoretic techniques to construct non-player characters that are able to assist a human player in collaborative games. The method is based on solving Markov decision processes, which can be difficult when the game state is described by many variables. To scale to more complex games, the method allows decomposition of a game task into subtasks, each of which can be modelled by a Markov decision process. Intention recognition is used to infer the subtask that the human is currently performing, allowing the helper to assist the human in performing the correct task. Experiments show that the method can be effective, giving near-human-level performance when assisting a human player in a collaborative game.
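The intention-recognition step described in the abstract can be pictured as a Bayesian belief update over candidate subtasks, where each subtask's MDP policy predicts how likely the human's observed action is under that subtask. A minimal sketch follows; all subtask names, policies, and probabilities are invented for illustration and are not taken from the paper.

```python
# Hedged sketch: Bayes-update a belief over which subtask the human is doing,
# using each subtask's policy as a likelihood model for the observed action.
# Subtasks, policies, and numbers below are hypothetical.

def update_intention(belief, action, state, policies):
    """One Bayesian update of the subtask belief from an observed human action.

    belief:   dict subtask -> prior probability
    policies: dict subtask -> function (state, action) -> P(action | state, subtask)
    """
    posterior = {g: belief[g] * policies[g](state, action) for g in belief}
    total = sum(posterior.values())
    if total == 0.0:  # the action is impossible under every subtask model
        return dict(belief)
    return {g: p / total for g, p in posterior.items()}

# Toy example: two hypothetical subtasks; moving "left" is more likely
# under the policy for "fetch_key" than for "open_door".
policies = {
    "fetch_key": lambda s, a: 0.8 if a == "left" else 0.2,
    "open_door": lambda s, a: 0.3 if a == "left" else 0.7,
}
belief = {"fetch_key": 0.5, "open_door": 0.5}
belief = update_intention(belief, "left", None, policies)
```

The helper can then plan its own action under the most probable subtask's MDP, which is the sense in which intention recognition lets it "assist the human in performing the correct task."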

Asymptotic Properties of Recursive Maximum Likelihood Estimation in Non-Linear State-Space Models

Machine Learning

Using stochastic gradient search and the optimal filter derivative, it is possible to perform recursive (i.e., online) maximum likelihood estimation in a non-linear state-space model. As the optimal filter and its derivative are analytically intractable for such a model, they need to be approximated numerically. In [Poyiadjis, Doucet and Singh, Biometrika 2018], a recursive maximum likelihood algorithm based on a particle approximation to the optimal filter derivative has been proposed and studied through numerical simulations. Here, this algorithm and its asymptotic behavior are analyzed theoretically. We show that the algorithm accurately estimates the maxima of the underlying (average) log-likelihood when the number of particles is sufficiently large. We also derive (relatively) tight bounds on the estimation error. The obtained results hold under (relatively) mild conditions and cover several classes of non-linear state-space models met in practice.
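To make the setting concrete, here is a much-simplified sketch on a toy linear-Gaussian model: a bootstrap particle filter estimates the log-likelihood, and a finite-difference surrogate (with common random numbers) stands in for the particle filter-derivative score estimate analyzed in the paper, driving one stochastic-gradient step. The model, step size, and all numbers are illustrative assumptions, not the paper's algorithm.

```python
import math
import random

def particle_loglik(theta, ys, n_particles=500, seed=0):
    """Bootstrap particle filter estimate of log p(y_{1:T} | theta) for the
    toy model x_t = theta*x_{t-1} + N(0,1), y_t = x_t + N(0,1)."""
    rng = random.Random(seed)  # fixed seed -> common random numbers
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    loglik = 0.0
    for y in ys:
        xs = [theta * x + rng.gauss(0.0, 1.0) for x in xs]          # propagate
        ws = [math.exp(-0.5 * (y - x) ** 2) / math.sqrt(2 * math.pi)
              for x in xs]                                           # weight
        loglik += math.log(sum(ws) / n_particles)
        xs = rng.choices(xs, weights=ws, k=n_particles)              # resample
    return loglik

def score_estimate(theta, ys, eps=1e-2):
    # Finite-difference surrogate for the filter-derivative score estimate;
    # shared seeds on both sides reduce the variance of the difference.
    return (particle_loglik(theta + eps, ys)
            - particle_loglik(theta - eps, ys)) / (2 * eps)

# Simulate data from the model, then take one stochastic-gradient MLE step.
rng = random.Random(1)
true_theta, x = 0.7, 0.0
ys = []
for _ in range(50):
    x = true_theta * x + rng.gauss(0.0, 1.0)
    ys.append(x + rng.gauss(0.0, 1.0))

theta = 0.2
theta += 0.01 * score_estimate(theta, ys)  # recursive MLE would repeat this online
```

The paper's algorithm instead propagates a particle approximation of the filter derivative alongside the filter itself, which is what makes fully recursive (per-observation) updates possible.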

A Bayesian Network for Real-Time Musical Accompaniment

Neural Information Processing Systems

We describe a computer system that provides a real-time musical accompaniment for a live soloist in a piece of non-improvised music for soloist and accompaniment. A Bayesian network is developed that represents the joint distribution on the times at which the solo and accompaniment notes are played, relating the two parts through a layer of hidden variables. The network is first constructed using the rhythmic information contained in the musical score. The network is then trained to capture the musical interpretations of the soloist and accompanist in an off-line rehearsal phase. During live accompaniment the learned distribution of the network is combined with a real-time analysis of the soloist's acoustic signal, performed with a hidden Markov model, to generate a musically principled accompaniment that respects all available sources of knowledge. A live demonstration will be provided.
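The HMM-based analysis of the soloist's signal amounts to score following: a left-to-right hidden Markov model whose hidden state is the soloist's position in the score, updated by the forward algorithm as pitches are detected. The sketch below illustrates that idea only; the score, transition probabilities, and detector accuracy are invented, and the real system works on acoustic features rather than symbolic pitch labels.

```python
# Hedged sketch of HMM score following: belief over score position,
# updated by the forward algorithm from noisy pitch detections.
# All notes and probabilities are hypothetical.

score = ["C", "E", "G", "C"]      # hypothetical solo part
P_STAY, P_ADVANCE = 0.4, 0.6      # left-to-right transition model (assumed)
P_CORRECT = 0.85                  # assumed pitch-detector accuracy
PITCHES = ["C", "D", "E", "F", "G", "A", "B"]

def emission(note, detected):
    """P(detected pitch | true note): mostly correct, uniform errors."""
    return P_CORRECT if detected == note else (1 - P_CORRECT) / (len(PITCHES) - 1)

def forward_step(belief, detected):
    """One forward-algorithm update of the score-position belief."""
    n = len(score)
    predicted = [0.0] * n
    for i, b in enumerate(belief):
        predicted[i] += b * P_STAY
        if i + 1 < n:
            predicted[i + 1] += b * P_ADVANCE
        else:
            predicted[i] += b * P_ADVANCE   # absorb at the final note
    posterior = [p * emission(score[i], detected) for i, p in enumerate(predicted)]
    z = sum(posterior)
    return [p / z for p in posterior]

belief = [1.0, 0.0, 0.0, 0.0]     # start at the first note
for detected in ["C", "E", "G"]:
    belief = forward_step(belief, detected)
position = max(range(len(score)), key=lambda i: belief[i])
```

In the full system, this position belief is what gets combined with the trained Bayesian network over note times to schedule the accompaniment.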

Probabilistic Interactive Installations

AAAI Conferences

We present a description of two small audio/visual immersive installations. The main framework is an interactive structure that enables multiple participants to generate, loosely speaking, jazz improvisations. The first uses a Bayesian network to respond to sung or played pitches with machine pitches in a constrained harmonic fashion. The second uses Bayesian networks and hidden Markov models to track human motion, play reactive chords, and respond to pitches both aurally and visually.

Multiagent Stochastic Planning With Bayesian Policy Recognition

AAAI Conferences

When operating in stochastic, partially observable, multiagent settings, it is crucial to accurately predict the actions of other agents. In my thesis work, I propose methodologies for learning the policy of external agents from their observed behavior, in the form of finite state controllers. To perform this task, I adopt Bayesian learning algorithms based on nonparametric prior distributions, which provide the flexibility required to infer models of unknown complexity. These methods are to be embedded in decision-making frameworks for autonomous planning in partially observable multiagent systems.
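The core object here, a finite state controller (FSC), maps observations to controller-node transitions and nodes to action distributions; Bayesian policy recognition then scores candidate controllers against the other agent's observed behavior. The sketch below shows only that likelihood-and-comparison step with two fixed hand-built controllers; the thesis's nonparametric priors, which let the number of controller nodes itself be inferred, are not modeled here, and all controllers, observations, and actions are invented.

```python
# Hedged sketch: likelihood of an observed (observation, action) trace under a
# candidate finite state controller, marginalizing over controller nodes, then
# a Bayes comparison of two hypothetical controllers. Illustrative only.

def sequence_likelihood(fsc, trace):
    """P(action sequence | FSC) for a trace of (observation, action) pairs.

    fsc: dict with
      'init':  dict node -> initial probability
      'act':   dict node -> dict action -> probability
      'trans': dict (node, observation) -> dict node -> probability
    """
    alpha = dict(fsc["init"])          # belief over controller nodes
    total = 1.0
    for obs, act in trace:
        new_alpha = {}                  # observation-driven node transition
        for n, p in alpha.items():
            for n2, q in fsc["trans"][(n, obs)].items():
                new_alpha[n2] = new_alpha.get(n2, 0.0) + p * q
        weighted = {n: p * fsc["act"][n].get(act, 0.0)
                    for n, p in new_alpha.items()}
        step_p = sum(weighted.values())
        if step_p == 0.0:               # observed action impossible under FSC
            return 0.0
        total *= step_p
        alpha = {n: p / step_p for n, p in weighted.items()}
    return total

# Hypothesis 1: patrol until an alarm is observed, then chase (two nodes).
fsc_reactive = {
    "init": {"n0": 1.0},
    "act": {"n0": {"patrol": 1.0}, "n1": {"chase": 1.0}},
    "trans": {("n0", "quiet"): {"n0": 1.0}, ("n0", "alarm"): {"n1": 1.0},
              ("n1", "quiet"): {"n1": 1.0}, ("n1", "alarm"): {"n1": 1.0}},
}
# Hypothesis 2: a one-node controller that acts at random.
fsc_blind = {
    "init": {"m": 1.0},
    "act": {"m": {"patrol": 0.5, "chase": 0.5}},
    "trans": {("m", "quiet"): {"m": 1.0}, ("m", "alarm"): {"m": 1.0}},
}

trace = [("quiet", "patrol"), ("quiet", "patrol"),
         ("alarm", "chase"), ("quiet", "chase")]
lik_reactive = sequence_likelihood(fsc_reactive, trace)
lik_blind = sequence_likelihood(fsc_blind, trace)
post_reactive = lik_reactive / (lik_reactive + lik_blind)   # uniform prior
```

With a nonparametric prior, the comparison would run over controllers of unbounded size rather than two fixed hypotheses, which is what lets the approach infer models of unknown complexity.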