Hierarchical Semi-Markov Conditional Random Fields for Recursive Sequential Data

Neural Information Processing Systems

Inspired by the hierarchical hidden Markov model (HHMM), we present the hierarchical semi-Markov conditional random field (HSCRF), a generalisation of embedded undirected Markov chains to model complex hierarchical, nested Markov processes. It is parameterised in a discriminative framework and has polynomial-time algorithms for learning and inference. Importantly, we develop efficient algorithms for learning and constrained inference in a partially-supervised setting, an important issue in practice where labels can only be obtained sparsely. We demonstrate the HSCRF in two applications: (i) recognising human activities of daily living (ADLs) from indoor surveillance cameras, and (ii) noun-phrase chunking. We show that the HSCRF is capable of learning rich hierarchical models with reasonable accuracy in both fully and partially observed data cases.


Hierarchical Semi-Markov Conditional Random Fields for Recursive Sequential Data

arXiv.org Machine Learning

Inspired by the hierarchical hidden Markov model (HHMM), we present the hierarchical semi-Markov conditional random field (HSCRF), a generalisation of embedded undirected Markov chains to model complex hierarchical, nested Markov processes. It is parameterised in a discriminative framework and has polynomial-time algorithms for learning and inference. Importantly, we consider partially-supervised learning and propose algorithms for generalised partially-supervised learning and constrained inference. We demonstrate the HSCRF in two applications: (i) recognising human activities of daily living (ADLs) from indoor surveillance cameras, and (ii) noun-phrase chunking. We show that the HSCRF is capable of learning rich hierarchical models with reasonable accuracy in both fully and partially observed data cases.
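The HSCRF's polynomial-time inference generalises the classic forward recursion of flat conditional random fields. As a minimal sketch of that flat special case (not the hierarchical model itself), the snippet below computes the log-partition function of a toy linear-chain CRF in O(T·K²) and checks it against exponential-time brute-force enumeration; the potentials are made-up values for illustration only.

```python
import itertools
import math

# Toy linear-chain CRF: the flat special case underlying the HSCRF.
K = 3          # number of states
T = 4          # sequence length
# Hypothetical log-potentials: unary[t][y] and pairwise[y_prev][y].
unary = [[0.1 * (t + 1) * (y + 1) for y in range(K)] for t in range(T)]
pairwise = [[0.2 * (a - b) ** 2 for b in range(K)] for a in range(K)]

def log_partition_forward():
    """Compute log Z with the O(T*K^2) forward recursion."""
    alpha = list(unary[0])
    for t in range(1, T):
        alpha = [
            unary[t][y] + math.log(sum(math.exp(alpha[a] + pairwise[a][y])
                                       for a in range(K)))
            for y in range(K)
        ]
    return math.log(sum(math.exp(a) for a in alpha))

def log_partition_brute():
    """Enumerate all K^T label sequences (exponential; for checking only)."""
    total = 0.0
    for ys in itertools.product(range(K), repeat=T):
        score = sum(unary[t][ys[t]] for t in range(T))
        score += sum(pairwise[ys[t - 1]][ys[t]] for t in range(1, T))
        total += math.exp(score)
    return math.log(total)

assert abs(log_partition_forward() - log_partition_brute()) < 1e-9
```

The hierarchical model replaces the single chain with nested semi-Markov layers, but the same dynamic-programming idea keeps inference polynomial.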


Hung H. Bui

AAAI Conferences

The Abstract Markov Policy (AMP) is a model for representing the execution of an abstract plan in noisy and uncertain domains. Methods for recognising an abstract policy from a sequence of noisy observations thus can be used for online plan recognition under uncertainty. In this paper, we extend previous work on policy recognition and consider a general type of abstract policies, including those with non-deterministic terminating conditions and factored representations of the state space. We analyse the structure of the stochastic model representing the execution of the general AMP and provide an efficient hybrid Rao-Blackwellised sampling method for policy recognition that scales well with the number of levels in the plan hierarchy. This illustrates that while the stochastic models for plan execution can be complex, they exhibit special structures which, if exploited, can lead to efficient plan recognition algorithms.
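The Rao-Blackwellisation principle behind the hybrid sampler above can be shown in miniature: to estimate E[f(X, Y)], sample only X and take the conditional expectation over Y analytically, which never increases variance. The toy model below (Uniform X, Bernoulli Y given X, with made-up parameters) is purely illustrative and is not the paper's policy-recognition model.

```python
import random

random.seed(0)

# Hypothetical toy model: X ~ Uniform{0,1,2}, Y | X ~ Bernoulli(p[X]),
# and we want E[X + Y].
p = [0.2, 0.5, 0.9]

def crude_estimate(n):
    # Plain Monte Carlo: sample both X and Y.
    total = 0.0
    for _ in range(n):
        x = random.randrange(3)
        y = 1 if random.random() < p[x] else 0
        total += x + y
    return total / n

def rao_blackwell_estimate(n):
    # Rao-Blackwellised: sample X only; E[Y | X = x] = p[x] is exact.
    total = 0.0
    for _ in range(n):
        x = random.randrange(3)
        total += x + p[x]
    return total / n

# Closed-form answer for checking: E[X + Y] = E[X] + mean(p).
exact = sum((x + p[x]) / 3 for x in range(3))
```

In the paper's setting, the analytically integrated part is the upper levels of the plan hierarchy rather than a single Bernoulli variable, but the variance-reduction mechanism is the same.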


Cutset Sampling for Bayesian Networks

arXiv.org Artificial Intelligence

The paper presents a new sampling methodology for Bayesian networks that samples only a subset of variables and applies exact inference to the rest. Cutset sampling is a network structure-exploiting application of the Rao-Blackwellisation principle to sampling in Bayesian networks. It improves convergence by exploiting memory-based inference algorithms. It can also be viewed as an anytime approximation of the exact cutset-conditioning algorithm developed by Pearl. Cutset sampling can be implemented efficiently when the sampled variables constitute a loop-cutset of the Bayesian network and, more generally, when the induced width of the network's graph conditioned on the observed sampled variables is bounded by a constant w. We demonstrate empirically the benefit of this scheme on a range of benchmarks.


Cutset Sampling for Bayesian Networks

Journal of Artificial Intelligence Research

The paper presents a new sampling methodology for Bayesian networks that samples only a subset of variables and applies exact inference to the rest. Cutset sampling is a network structure-exploiting application of the Rao-Blackwellisation principle to sampling in Bayesian networks. It improves convergence by exploiting memory-based inference algorithms. It can also be viewed as an anytime approximation of the exact cutset-conditioning algorithm developed by Pearl. Cutset sampling can be implemented efficiently when the sampled variables constitute a loop-cutset of the Bayesian network and, more generally, when the induced width of the network's graph conditioned on the observed sampled variables is bounded by a constant w. We demonstrate empirically the benefit of this scheme on a range of benchmarks.
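A minimal sketch of the cutset-sampling idea, under assumed toy parameters: in the four-node binary network A → B, A → C, B → D, C → D, conditioning on A breaks the single loop, so {A} is a loop-cutset. We sample A and compute the rest exactly given each sample (here by direct enumeration, standing in for polytree inference); the CPTs are invented for illustration.

```python
import itertools
import random

random.seed(1)

# Hypothetical CPTs for the loopy network A -> B, A -> C, B -> D, C -> D.
pA = 0.6                          # P(A=1)
pB = {0: 0.3, 1: 0.8}             # P(B=1 | A)
pC = {0: 0.4, 1: 0.7}             # P(C=1 | A)
pD = {(0, 0): 0.1, (0, 1): 0.5,   # P(D=1 | B, C)
      (1, 0): 0.6, (1, 1): 0.9}

def joint(a, b, c, d):
    """Full joint probability of one assignment."""
    pa = pA if a else 1 - pA
    pb = pB[a] if b else 1 - pB[a]
    pc = pC[a] if c else 1 - pC[a]
    pd = pD[(b, c)] if d else 1 - pD[(b, c)]
    return pa * pb * pc * pd

def exact_pD1():
    """Exact P(D=1) by summing the full joint (for checking only)."""
    return sum(joint(a, b, c, 1)
               for a, b, c in itertools.product((0, 1), repeat=3))

def pD1_given_a(a):
    """Exact inference over the loop-free remainder given the cutset value."""
    return sum((pB[a] if b else 1 - pB[a]) * (pC[a] if c else 1 - pC[a])
               * pD[(b, c)]
               for b, c in itertools.product((0, 1), repeat=2))

def cutset_estimate_pD1(n):
    """Sample only the cutset variable A; everything else is exact."""
    return sum(pD1_given_a(1 if random.random() < pA else 0)
               for _ in range(n)) / n
```

Because only the cutset is sampled, each sample's estimate already averages over B, C, and D, which is exactly the Rao-Blackwellised variance reduction the abstract describes.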