Markov Property


"What's that? Reinforcement Learning in the Real-world?"

#artificialintelligence

Reinforcement Learning offers a distinctive way of solving the Machine Learning puzzle. Its sequential decision-making ability and its suitability for tasks requiring a trade-off between immediate and long-term returns are among the qualities that make it desirable in settings where supervised or unsupervised learning approaches would not fit as well. By having agents start with zero knowledge and then learn qualitatively good behaviour through interaction with the environment, it's almost fair to say Reinforcement Learning (RL) is the closest thing we have yet to Artificial General Intelligence. We can see RL being used in robotics control and treatment design in healthcare, among other areas; so why aren't we seeing more RL agents scaled up to real-world production systems? There's a reason why games like Atari make such nice RL benchmarks -- they let us care only about maximizing the score rather than worrying about designing a reward function.
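
As a rough illustration of the interaction loop behind that description (a minimal sketch with a made-up toy environment, not taken from the article), an agent acts, observes a reward and the next state, and is judged by its discounted return:

```python
# A minimal sketch of the agent-environment loop that RL formalizes: the agent acts,
# the environment returns a reward and next state, and the agent's objective is the
# discounted return. The environment below is a hypothetical toy example.
import random

class ToyEnv:
    """Hypothetical 5-state chain: moving right eventually reaches a goal reward."""
    def __init__(self):
        self.state = 0

    def step(self, action):  # action: 0 = left, 1 = right
        self.state = max(0, min(4, self.state + (1 if action == 1 else -1)))
        reward = 1.0 if self.state == 4 else 0.0
        done = self.state == 4
        return self.state, reward, done

def discounted_return(rewards, gamma=0.95):
    """Trade-off between immediate and long-term rewards via the discount gamma."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

env, rewards, done = ToyEnv(), [], False
while not done:
    state, reward, done = env.step(random.choice([0, 1]))  # a zero-knowledge (random) policy
    rewards.append(reward)
print(discounted_return(rewards))
```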


Does the Markov Decision Process Fit the Data: Testing for the Markov Property in Sequential Decision Making

arXiv.org Machine Learning

The Markov assumption (MA) is fundamental to the empirical validity of reinforcement learning. In this paper, we propose a novel Forward-Backward Learning procedure to test MA in sequential decision making. The proposed test does not assume any parametric form on the joint distribution of the observed data and plays an important role in identifying the optimal policy in high-order Markov decision processes and partially observable MDPs. We apply our test to both synthetic datasets and a real data example from mobile health studies to illustrate its usefulness.
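
To give a feel for what such a test targets (a simplified stand-in of my own, not the paper's Forward-Backward procedure), one can ask whether the previous state still helps predict the next state once the current state is given:

```python
# A simplified stand-in for checking the Markov assumption: if the process is Markov,
# adding S_{t-1} to a predictor of S_{t+1} given S_t should not improve the fit.
# Synthetic data and ordinary least squares; not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
T = 5000
s = np.zeros(T)
for t in range(2, T):  # an AR(2) chain, which violates the first-order Markov property
    s[t] = 0.6 * s[t - 1] + 0.3 * s[t - 2] + rng.normal()

def rss(X, y):
    """Residual sum of squares from a least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

y = s[2:]
markov = np.column_stack([np.ones_like(y), s[1:-1]])   # predictors: S_t only
non_markov = np.column_stack([markov, s[:-2]])         # predictors: S_t and S_{t-1}
print(rss(markov, y), rss(non_markov, y))  # a large drop suggests the Markov assumption fails
```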


Causal Structure Discovery from Distributions Arising from Mixtures of DAGs

arXiv.org Machine Learning

We consider distributions arising from a mixture of causal models, where each model is represented by a directed acyclic graph (DAG). We provide a graphical representation of such mixture distributions and prove that this representation encodes the conditional independence relations of the mixture distribution. We then consider the problem of structure learning based on samples from such distributions. Since the mixing variable is latent, we consider causal structure discovery algorithms such as FCI that can deal with latent variables. We show that such algorithms recover a "union" of the component DAGs and can identify variables whose conditional distributions vary across the component DAGs. We demonstrate our results on synthetic and real data, showing that the inferred graph identifies nodes that vary between the different mixture components. As an immediate application, we demonstrate how retrieval of this causal information can be used to cluster samples according to each mixture component.
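
A toy illustration of the phenomenon (my own sketch, not the paper's algorithm): when a latent mixture label switches the X -> Y mechanism, the conditional distribution of Y given X differs across components, and that difference can be used to cluster samples:

```python
# Samples come from a mixture of two linear DAGs over (X, Y); the latent mixture label z
# changes the X -> Y mechanism, so Y | X varies between components and clustering on the
# local X-Y relationship recovers the component assignment.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
z = rng.integers(0, 2, n)                  # latent mixture component (never observed)
x = rng.normal(size=n)
y = np.where(z == 0, 2.0 * x, -2.0 * x) + 0.3 * rng.normal(size=n)

guess = (x * y < 0).astype(int)            # sign of the local X-Y relationship
print("agreement with latent component:", (guess == z).mean())
```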


Before we can find a model, we must forget about perfection

arXiv.org Artificial Intelligence

With Reinforcement Learning we assume that a model of the world exists, and furthermore that this model is perfect (i.e. it describes the world completely and unambiguously). This article will demonstrate that it does not make sense to search for the perfect model, because such a model is too complicated and practically impossible to find. We will show that we should abandon the pursuit of perfection and pursue Event-Driven (ED) models instead. These models are a generalization of Markov Decision Process (MDP) models, and this generalization is essential because nothing can be found without it. Rather than a single MDP, we will aim to find a raft of neat, simple ED models, each describing a single dependency or property. In other words, we will replace the search for a singular, complex perfect model with a search for a large number of simple models.


On perfectness in Gaussian graphical models

arXiv.org Machine Learning

Knowing when a graphical model is perfect with respect to a distribution is essential in order to relate separation in the graph to conditional independence in the distribution, and this is particularly important when performing inference from data. When the model is perfect, there is a one-to-one correspondence between conditional independence statements in the distribution and separation statements in the graph. Previous work has shown that almost all models based on linear directed acyclic graphs as well as Gaussian chain graphs are perfect, the latter of which subsumes Gaussian graphical models (i.e., the undirected Gaussian models) as a special case. However, the complexity of chain graph models leads to a proof of this result which is indirect and mired in the complications of parameterizing this general class. In this paper, we directly approach the problem of perfectness for Gaussian graphical models, and provide a new proof, via a more transparent parametrization, that almost all such models are perfect. Our approach is based on, and substantially extends, a construction of Lněnička and Matúš showing the existence of a perfect Gaussian distribution for any graph.
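
As a small numeric illustration of the correspondence the abstract refers to (the example graph and numbers are mine): in a Gaussian graphical model a missing edge corresponds to a zero in the precision matrix, which encodes a conditional independence:

```python
# In a Gaussian graphical model, a zero entry of the precision (inverse covariance)
# matrix corresponds to a missing edge, i.e. a conditional independence; a "perfect"
# model matches such statements one-to-one with separations in the graph.
import numpy as np

# Precision matrix for the chain graph 1 - 2 - 3: no (1,3) edge.
K = np.array([[1.0, 0.4, 0.0],
              [0.4, 1.0, 0.4],
              [0.0, 0.4, 1.0]])
Sigma = np.linalg.inv(K)

# Partial correlation of variables 1 and 3 given 2, computed from the precision matrix.
partial_corr_13 = -K[0, 2] / np.sqrt(K[0, 0] * K[2, 2])
print("marginal cov(1,3):", Sigma[0, 2])          # nonzero: 1 and 3 are dependent marginally
print("partial corr(1,3 | 2):", partial_corr_13)  # zero: 1 and 3 independent given 2, matching the missing edge
```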


Introduction to Deep Q-Learning for Reinforcement Learning (in Python)

#artificialintelligence

I have always been fascinated with games. The seemingly infinite options available to perform an action under a tight timeline – it's a thrilling experience. So when I read about the incredible algorithms DeepMind was coming up with (like AlphaGo and AlphaStar), I was hooked. I wanted to learn how to make these systems on my own machine. And that led me into the world of deep reinforcement learning (Deep RL).


Markov Properties of Discrete Determinantal Point Processes

arXiv.org Machine Learning

Determinantal point processes (DPPs) are probabilistic models of repulsion. When used to represent the occurrence of random subsets of a finite base set, DPPs allow us to model global negative associations in a mathematically elegant and direct way. Discrete DPPs have become popular and computationally tractable models for solving several machine learning tasks that require the selection of diverse objects, and have been successfully applied to numerous real-life problems. Despite their popularity, the statistical properties of such models have not been adequately explored. In this note, we derive the Markov properties of discrete DPPs and show how they can be expressed using graphical models.
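
A minimal illustration of the repulsion DPPs encode (my own sketch, not taken from the note): inclusion probabilities are principal minors of the marginal kernel, and joint inclusion falls below the product of the marginals:

```python
# For a discrete DPP with marginal kernel K, the probability that a subset S is contained
# in the random draw is det(K_S). Comparing joint and product inclusion probabilities
# exhibits the negative association (repulsion).
import numpy as np

K = np.array([[0.6, 0.3],
              [0.3, 0.5]])   # a valid DPP marginal kernel (eigenvalues between 0 and 1)

p1 = K[0, 0]                 # P(item 1 in the sample)
p2 = K[1, 1]                 # P(item 2 in the sample)
p12 = np.linalg.det(K)       # P(both items in the sample) = det of the 2x2 principal minor
print(p12, "<=", p1 * p2)    # repulsion: joint inclusion below what independence would give
```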


Causal Calculus in the Presence of Cycles, Latent Confounders and Selection Bias

arXiv.org Machine Learning

We prove the main rules of causal calculus (also called do-calculus) for interventional structural causal models (iSCMs), a generalization of a recently proposed general class of non-/linear structural causal models that allow for cycles, latent confounders and arbitrary probability distributions. We also generalize adjustment criteria and formulas from the acyclic setting to the general one (i.e. iSCMs). Such criteria then allow one to estimate (conditional) causal effects from observational data that was (partially) gathered under selection bias and cycles. This generalizes the backdoor criterion, the selection-backdoor criterion and extensions of these to arbitrary iSCMs. Together, our results thus enable causal reasoning in the presence of cycles, latent confounders and selection bias.
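
For context, the classical backdoor adjustment that these results generalize can be sketched numerically (standard acyclic case only, not the paper's iSCM extension; the data-generating process below is invented purely for illustration):

```python
# Backdoor adjustment: with a confounder Z satisfying the backdoor criterion for X -> Y,
# P(Y | do(X = x)) = sum_z P(Y | X = x, Z = z) P(Z = z), so the stratified contrast
# recovers the causal effect while the naive contrast is confounded.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
z = rng.integers(0, 2, n)                                        # binary confounder
x = (rng.random(n) < np.where(z == 1, 0.8, 0.2)).astype(int)     # Z influences treatment X
y = (rng.random(n) < 0.2 + 0.3 * x + 0.3 * z).astype(int)        # Z and X influence outcome Y

naive = y[x == 1].mean() - y[x == 0].mean()                      # confounded contrast
adjusted = sum((y[(x == 1) & (z == v)].mean() - y[(x == 0) & (z == v)].mean()) * (z == v).mean()
               for v in (0, 1))                                  # backdoor-adjusted effect (about 0.3)
print(naive, adjusted)
```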


On a hypergraph probabilistic graphical model

arXiv.org Artificial Intelligence

We propose a directed acyclic hypergraph framework for a probabilistic graphical model that we call Bayesian hypergraphs. The space of directed acyclic hypergraphs is much larger than the space of chain graphs. Hence Bayesian hypergraphs can model much finer factorizations than Bayesian networks or LWF chain graphs and provide simpler and more computationally efficient procedures for factorizations and interventions. Bayesian hypergraphs also allow a modeler to represent causal patterns of interaction such as Noisy-OR graphically (without additional annotations). We introduce global, local and pairwise Markov properties of Bayesian hypergraphs and prove under which conditions they are equivalent. We define a projection operator, called shadow, that maps Bayesian hypergraphs to chain graphs, and show that the Markov properties of a Bayesian hypergraph are equivalent to those of its corresponding chain graph. We extend the causal interpretation of LWF chain graphs to Bayesian hypergraphs and provide corresponding formulas and a graphical criterion for intervention.
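
A quick sketch of the Noisy-OR interaction pattern mentioned above (my own illustration; it does not reproduce the hypergraph semantics): each active parent independently triggers the child with its own probability:

```python
# Noisy-OR conditional distribution: P(child = 1 | parents) = 1 - (1 - leak) * prod over
# active parents of (1 - p_i), i.e. the child stays off only if every active parent
# independently fails to trigger it.
def noisy_or(parent_states, leak=0.0, probs=(0.7, 0.5)):
    """Each active parent independently 'fires' the child with its own probability."""
    fail = 1.0 - leak
    for on, p in zip(parent_states, probs):
        if on:
            fail *= (1.0 - p)
    return 1.0 - fail

print(noisy_or([1, 0]))   # 0.7
print(noisy_or([1, 1]))   # 1 - 0.3 * 0.5 = 0.85
```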


Markov Property in Generative Classifiers

arXiv.org Machine Learning

Generative classifiers are a wide class of machine learning models that estimate the joint probability distribution over the predictor and class variables. From the estimated distribution, a decision over the class variable can be made given the values of the predictors. Algebraic and geometric methods can be valuable tools in dealing with discrete probabilities as graphical models (Garcia et al., 2005; Settimi and Smith, 1998), contingency tables and exponential families (Diaconis and Sturmfels, 1995; Fienberg and Gilbert, 1970). Varando et al. (2015) studied the decision functions induced by a large class of generative classifiers based on Bayesian networks, extending the results of Minsky (1961), Peot (1996) and Jaeger (2003). Ling and Zhang (2002) described the complexity of Bayesian network classifiers, linking the graph structure with the maximum order of the XORs representable by the corresponding classifier. In this article we develop a framework to study generative binary classifiers, over categorical predictors, under conditional independence assumptions.
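
As a minimal member of the class being studied (a naive Bayes sketch of my own, not the article's framework), one can estimate the joint distribution over the class and categorical predictors and classify by comparing the two class posteriors:

```python
# A generative binary classifier over categorical (here binary) predictors: estimate
# P(class) and P(predictor | class) with Laplace smoothing, form the joint
# P(class) * P(x | class) under a conditional independence assumption, and pick the
# class with the larger value. Data below is a tiny made-up example.
import numpy as np

X = np.array([[0, 1], [1, 1], [0, 0], [1, 0], [1, 1], [0, 0]])   # two binary predictors
y = np.array([0, 1, 0, 1, 1, 0])

def fit(X, y, alpha=1.0):
    """Estimate class priors and per-class predictor probabilities with smoothing."""
    priors = np.array([(y == c).mean() for c in (0, 1)])
    cond = np.array([[(X[y == c, j] == 1).sum() + alpha for j in range(X.shape[1])]
                     for c in (0, 1)])
    cond /= np.array([[(y == c).sum() + 2 * alpha] for c in (0, 1)])
    return priors, cond

def predict(x, priors, cond):
    joint = priors * np.prod(np.where(x == 1, cond, 1 - cond), axis=1)   # P(c) * P(x | c)
    return int(joint[1] > joint[0])

params = fit(X, y)
print(predict(np.array([1, 1]), *params))
```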