 Markov Models


Asymptotically Optimal Sequential Testing with Markovian Data

Sethi, Alhad, Sagar, Kavali Sofia, Agrawal, Shubhada, Basu, Debabrota, Karthik, P. N.

arXiv.org Machine Learning

We study one-sided and $\alpha$-correct sequential hypothesis testing for data generated by an ergodic Markov chain. The null hypothesis is that the unknown transition matrix belongs to a prescribed set $P$ of stochastic matrices, and the alternative corresponds to a disjoint set $Q$. We establish a tight, non-asymptotic, instance-dependent lower bound on the expected stopping time of any valid sequential test under the alternative. Our novel analysis improves upon existing lower bounds, which are either asymptotic or provably sub-optimal in this setting. Our lower bound incorporates both the stationary distribution and the transition structure induced by the unknown Markov chain. We further propose an optimal test whose expected stopping time matches this lower bound asymptotically as $\alpha \to 0$. We illustrate the usefulness of our framework through applications to sequential detection of model misspecification in Markov Chain Monte Carlo and to testing structural properties, such as the linearity of transition dynamics, in Markov decision processes. Our findings yield a sharp and general characterization of optimal sequential testing procedures under Markovian dependence.
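
To make the setting concrete, the sketch below implements a generalized-likelihood-ratio (GLR) style one-sided sequential test for the special case of a singleton null $P = \{P_0\}$ on a finite state space: transition counts are accumulated as the chain is observed, and the test stops and rejects the null once the GLR statistic exceeds a threshold of order $\log(1/\alpha)$. This is an illustrative sketch rather than the test proposed in the paper; the function name, the threshold correction constant `c`, and the simulation at the end are assumptions made for the example.

```python
import numpy as np

def sequential_glr_test(sample_path, P0, alpha, c=1.0):
    """One-sided sequential GLR test of the singleton null P0 (illustrative).

    sample_path : sequence of states in {0, ..., S-1}
    P0          : (S, S) null transition matrix
    alpha       : target error probability under the null
    c           : slack constant in the threshold (an illustrative choice)

    Returns (stopping_time, decision); stopping_time is None if the path is
    exhausted without rejecting.
    """
    S = P0.shape[0]
    counts = np.zeros((S, S))                      # empirical transition counts
    path = list(sample_path)
    for n in range(1, len(path)):
        i, j = path[n - 1], path[n]
        counts[i, j] += 1.0
        # Generalized log-likelihood ratio between the empirical (maximum-
        # likelihood) transition matrix and P0, summed over visited rows.
        glr = 0.0
        for s in range(S):
            row_total = counts[s].sum()
            if row_total == 0:
                continue
            p_hat = counts[s] / row_total
            seen = counts[s] > 0
            glr += np.sum(counts[s, seen] * np.log(p_hat[seen] / P0[s, seen]))
        # Threshold of order log(1/alpha); the log(n + 1) term is an
        # illustrative correction for the growing sample size.
        if glr > np.log(1.0 / alpha) + c * np.log(n + 1):
            return n, "reject null"
    return None, "never rejected"

# Illustrative run: the chain is simulated from P_true, which differs from P0.
rng = np.random.default_rng(0)
P_true = np.array([[0.7, 0.3], [0.4, 0.6]])
P0 = np.array([[0.5, 0.5], [0.5, 0.5]])
path = [0]
for _ in range(5000):
    path.append(int(rng.choice(2, p=P_true[path[-1]])))
print(sequential_glr_test(path, P0, alpha=0.01))
```

When the true chain differs from $P_0$, the GLR statistic grows roughly linearly in the number of observed transitions, so the stopping time scales like $\log(1/\alpha)$ divided by a divergence rate; the abstract's lower bound concerns precisely how that rate depends on the stationary distribution and transition structure.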


Bayesian Control of Large MDPs with Unknown Dynamics in Data-Poor Environments

Neural Information Processing Systems

We propose a Bayesian decision-making framework for the control of Markov Decision Processes (MDPs) with unknown dynamics and large, possibly continuous, state, action, and parameter spaces in data-poor environments. Most existing adaptive controllers for MDPs with unknown dynamics are based on the reinforcement learning framework and rely on large data sets acquired through sustained direct interaction with the system or via a simulator. This is not feasible in many applications, due to ethical, economic, and physical constraints. The proposed framework addresses the data-poverty issue by decomposing the problem into an offline planning stage, which does not rely on sustained direct interaction with the system or a simulator, and an online execution stage. In the offline stage, parallel Gaussian process temporal difference (GPTD) learning techniques are employed for near-optimal Bayesian approximation of the expected discounted reward over a sample drawn from the prior distribution of the unknown parameters. In the online stage, the action with the maximum expected return with respect to the posterior distribution of the parameters is selected. This is achieved by approximating the posterior distribution with a Markov Chain Monte Carlo (MCMC) algorithm and then constructing multiple Gaussian processes over the parameter space for efficient prediction of the mean expected return at the MCMC samples. The effectiveness of the proposed framework is demonstrated using a simple dynamical-system model with continuous state and action spaces, as well as a more complex model of a metastatic melanoma gene regulatory network observed through noisy synthetic gene expression data.
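
The online stage described above can be pictured with the following sketch, which assumes that MCMC posterior samples of the unknown parameter and offline estimates of the expected discounted return (one set per candidate action) are already available as arrays. The function name, data layout, and kernel choice are illustrative assumptions rather than the authors' implementation; scikit-learn's GaussianProcessRegressor stands in for the Gaussian processes constructed over the parameter space.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def select_action(theta_prior, returns_per_action, theta_posterior):
    """Pick the action with the largest posterior-expected return (illustrative).

    theta_prior        : (N, d) parameter samples used in the offline stage
    returns_per_action : dict mapping each action to an (N,) array of offline
                         estimates of the expected return at theta_prior
    theta_posterior    : (M, d) MCMC samples from the current posterior
    """
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
    best_action, best_value = None, -np.inf
    for action, returns in returns_per_action.items():
        # One Gaussian process per action over the parameter space, fitted to
        # the offline return estimates.
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(theta_prior, returns)
        # Posterior-expected return: average the GP predictive mean over the
        # MCMC samples from the parameter posterior.
        expected_return = gp.predict(theta_posterior).mean()
        if expected_return > best_value:
            best_action, best_value = action, expected_return
    return best_action, best_value
```

In the framework summarized above, the offline return estimates would come from the parallel GPTD stage; here they are treated as generic inputs so the posterior-averaging and argmax step can be run in isolation.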


Relational neurosymbolic Markov models

AIHub

Our most powerful artificial agents cannot be told exactly what to do, especially in complex planning environments. They rely almost exclusively on neural networks to perform their tasks, but neural networks cannot easily be told to obey certain rules or adhere to existing background knowledge. Such uncontrolled behaviour might be nothing more than a simple annoyance the next time you ask an LLM to generate a schedule for reaching a deadline in two days and it hallucinates that days have 48 hours instead of 24, but it can be much more impactful when that same LLM controls an agent responsible for navigating a warehouse filled with TNT and it decides to go just a little too close to the storage compartments. Luckily, controlling neural networks has gained a lot of attention over the last few years through the development of neurosymbolic AI. Neurosymbolic AI, or NeSy for short, aims to combine the learning abilities of neural networks with the guarantees offered by symbolic methods based on automated mathematical reasoning.