Learning Graphical Models


The Mysterious Math of How Cells Determine Their Own Fate

WIRED

In 1891, when the German biologist Hans Driesch split two-cell sea urchin embryos in half, he found that each of the separated cells then gave rise to its own complete, albeit smaller, larva. Somehow, the halves "knew" to change their entire developmental program: At that stage, the blueprint for what they would become had apparently not yet been drawn out, at least not in ink. Since then, scientists have been trying to understand what goes into making this blueprint, and how instructive it is. It's now known that some form of positional information makes genes variously switch on and off throughout the embryo, giving cells distinct identities based on their location.


How to Improve Political Forecasts - Issue 70: Variables

Nautilus

The 2020 Democratic candidates are out of the gate and the pollsters have the call! Bernie Sanders is leading by two lengths with Kamala Harris and Elizabeth Warren right behind, but Cory Booker and Beto O'Rourke are coming on fast! The political horse-race season is upon us and I bet I know what you are thinking: "Stop!" Every election we complain about horse-race coverage and every election we stay glued to it all the same. The problem with this kind of coverage is not that it's unimportant.


Counterexample-Guided Strategy Improvement for POMDPs Using Recurrent Neural Networks

arXiv.org Artificial Intelligence

We study strategy synthesis for partially observable Markov decision processes (POMDPs). The particular problem is to determine strategies that provably adhere to (probabilistic) temporal logic constraints. This problem is computationally intractable and theoretically hard. We propose a novel method that combines techniques from machine learning and formal verification. First, we train a recurrent neural network (RNN) to encode POMDP strategies. The RNN accounts for memory-based decisions without the need to expand the full belief space of a POMDP. Second, we restrict the RNN-based strategy to represent a finite-memory strategy and implement it on a specific POMDP. For the resulting finite Markov chain, efficient formal verification techniques provide provable guarantees against temporal logic specifications. If the specification is not satisfied, counterexamples supply diagnostic information. We use this information to improve the strategy by iteratively training the RNN. Numerical experiments show that the proposed method improves on the state of the art in POMDP solving by up to three orders of magnitude in both solving times and model sizes.
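To make the iteration concrete, below is a minimal sketch of a counterexample-guided loop in the spirit of the abstract: train an RNN strategy, restrict it to finite memory, verify the induced Markov chain, and retrain on counterexamples. The helpers `sample_initial_traces`, `extract_finite_memory_strategy`, `verify_induced_chain`, and `relabel_counterexample` are hypothetical placeholders, not the authors' implementation.

```python
# Hedged sketch of a counterexample-guided strategy-improvement loop.
import torch
import torch.nn as nn

class RNNStrategy(nn.Module):
    """Maps observation sequences to action distributions; memory lives in the GRU state."""
    def __init__(self, n_obs, n_act, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(n_obs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_act)

    def forward(self, obs_seq):                 # obs_seq: (batch, T, n_obs), one-hot
        h, _ = self.rnn(obs_seq)
        return torch.log_softmax(self.head(h), dim=-1)

def improve_strategy(pomdp, spec, n_obs, n_act, max_iters=20):
    policy = RNNStrategy(n_obs, n_act)
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    data = sample_initial_traces(pomdp)         # assumed helper: (obs_seq, act_seq) pairs
    for _ in range(max_iters):
        # 1) fit the RNN to the current training data (behavioural-cloning style)
        for obs_seq, act_seq in data:
            loss = nn.functional.nll_loss(policy(obs_seq).flatten(0, 1), act_seq.flatten())
            opt.zero_grad(); loss.backward(); opt.step()
        # 2) restrict to a finite-memory strategy and model-check the induced chain
        fsc = extract_finite_memory_strategy(policy, pomdp)            # assumed helper
        ok, counterexample = verify_induced_chain(pomdp, fsc, spec)    # e.g. via a model checker
        if ok:
            return fsc
        # 3) turn the counterexample into new training data and iterate
        data += relabel_counterexample(counterexample)                 # assumed helper
    return None
```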


15 Great Articles about Bayesian Methods and Networks

#artificialintelligence

This resource is part of a series on specific topics related to data science: regression, clustering, neural networks, deep learning, decision trees, ensembles, correlation, Python, R, Tensorflow, SVM, data reduction, feature selection, experimental design, cross-validation, model fitting, and many more. To keep receiving these articles, sign up on DSC.


Machine Learning Interview Questions and Answers

#artificialintelligence

Credo systemz is making it a cakewalk for you by providing a list of the most probable Machine learning interview questions. These interview questions and answers are framed by a Machine learning Engineer. This set of Machine learning interview questions and answers is the perfect guide for you to learn all the concepts required to clear a Machine learning interview. To get in-depth knowledge of Machine learning, you can enroll for live Machine learning Certification Training by Credo systemz with 24/7 support and lifetime access. In answering this question, try to show your understanding of the broad applications... What is bucketing in machine learning? Converting a (usually continuous) feature into multiple binary... What are the advantages of Naive Bayes? A Naïve Bayes classifier will converge quicker than a discriminative... What is inductive machine learning? Inductive machine learning involves the process of learning... What Are The Three Stages To Build The Model In Machine Learning? (a).
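As a concrete companion to the bucketing answer above, here is a minimal sketch of converting a continuous feature into bucket indicators; the feature name, bin edges, and values are arbitrary illustrations.

```python
# Minimal illustration of "bucketing": turning a continuous feature into
# discrete bucket IDs and one binary indicator column per bucket.
import numpy as np

ages = np.array([3, 17, 25, 42, 68])          # a continuous-ish feature (assumed values)
edges = [18, 35, 55]                           # bucket boundaries (assumed)

bucket_ids = np.digitize(ages, edges)          # 0..3: which bucket each value falls into
one_hot = np.eye(len(edges) + 1)[bucket_ids]   # one binary column per bucket

print(bucket_ids)   # [0 0 1 2 3]
print(one_hot)
```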


Incremental Learning of Discrete Planning Domains from Continuous Perceptions

arXiv.org Artificial Intelligence

We propose a framework for learning discrete deterministic planning domains. In this framework, an agent learns the domain by observing the action effects through continuous features that describe the state of the environment after the execution of each action. In addition, the agent learns its perception function, i.e., a probabilistic mapping between state variables and sensor data, represented as a vector of continuous random variables called perception variables. We define an algorithm that updates the planning domain and the perception function by (i) introducing new states, either by extending the possible values of state variables or by weakening their constraints; (ii) adapting the perception function to fit the observed data; and (iii) adapting the transition function on the basis of the executed actions and the effects observed via the perception function. The framework is able to deal with exogenous events that happen in the environment.
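A toy sketch of that update cycle follows; it is not the paper's algorithm. Here each discrete value of a state variable is modelled by a Gaussian over one perception variable, poorly explained observations trigger new state values (step i), the Gaussian means are nudged toward observations (step ii), and transition counts are updated from executed actions (step iii). All names and thresholds are assumptions.

```python
# Toy sketch of incremental domain + perception-function learning.
import numpy as np

class DomainLearner:
    def __init__(self, novelty_threshold=1e-3):
        self.means, self.vars = [], []       # perception function: one Gaussian per state value
        self.transitions = {}                # (state, action) -> {next_state: count}
        self.tau = novelty_threshold

    def _likelihood(self, x, s):
        m, v = self.means[s], self.vars[s]
        return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

    def classify(self, x):
        """(i) introduce a new state value if no existing one explains x."""
        if self.means:
            liks = [self._likelihood(x, s) for s in range(len(self.means))]
            best = int(np.argmax(liks))
            if liks[best] > self.tau:
                # (ii) adapt the perception function with a small step toward x
                self.means[best] += 0.1 * (x - self.means[best])
                return best
        self.means.append(float(x)); self.vars.append(1.0)
        return len(self.means) - 1

    def observe(self, prev_state, action, x):
        """(iii) adapt the transition function from the observed effect."""
        nxt = self.classify(x)
        key = (prev_state, action)
        self.transitions.setdefault(key, {}).setdefault(nxt, 0)
        self.transitions[key][nxt] += 1
        return nxt
```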


Contextual Markov Decision Processes using Generalized Linear Models

arXiv.org Artificial Intelligence

We consider the recently proposed reinforcement learning (RL) framework of Contextual Markov Decision Processes (CMDP), where the agent has a sequence of episodic interactions with tabular environments chosen from a possibly infinite set. The parameters of these environments depend on a context vector that is available to the agent at the start of each episode. In this paper, we propose a no-regret online RL algorithm in the setting where the MDP parameters are obtained from the context using generalized linear models (GLMs). The proposed algorithm GL-ORL relies on efficient online updates and is also memory efficient. Our analysis of the algorithm gives new results in the logit link case and improves previous bounds in the linear case. Our algorithm uses efficient Online Newton Step updates to build confidence sets. Moreover, for any strongly convex link function, we show a generic conversion from any online no-regret algorithm to confidence sets.
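To illustrate the modelling assumption (not GL-ORL itself), the sketch below derives a tabular MDP's parameters from a context vector through a GLM: a softmax (logit) link for transition probabilities and a linear link for rewards. The weight tensors W_p, W_r and their shapes are illustrative assumptions.

```python
# Hedged sketch: context -> tabular MDP parameters via a GLM.
import numpy as np

def mdp_from_context(context, W_p, W_r):
    """
    context : (d,)         context vector observed at the start of an episode
    W_p     : (S, A, S, d) weights mapping context to transition logits
    W_r     : (S, A, d)    weights mapping context to mean rewards (linear link)
    Returns a transition kernel P of shape (S, A, S) and a reward table R of shape (S, A).
    """
    logits = W_p @ context                      # (S, A, S)
    logits -= logits.max(axis=-1, keepdims=True)
    P = np.exp(logits)
    P /= P.sum(axis=-1, keepdims=True)          # softmax (logit link) over next states
    R = W_r @ context                           # linear link for rewards
    return P, R

rng = np.random.default_rng(0)
S, A, d = 4, 2, 3
P, R = mdp_from_context(rng.normal(size=d),
                        rng.normal(size=(S, A, S, d)),
                        rng.normal(size=(S, A, d)))
assert np.allclose(P.sum(axis=-1), 1.0)         # each (s, a) row is a distribution
```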


A Multi-armed Bandit MCMC, with applications in sampling from doubly intractable posterior

arXiv.org Artificial Intelligence

Markov chain Monte Carlo (MCMC) algorithms are widely used to sample from complicated distributions, especially to sample from the posterior distribution in Bayesian inference. However, MCMC is not directly applicable when facing the doubly intractable problem. In this paper, we discuss and compare two existing solutions -- Pseudo-marginal Monte Carlo and the Exchange Algorithm. This paper also proposes a novel algorithm: Multi-armed Bandit MCMC (MABMC), which chooses between two (or more) randomized acceptance ratios in each step. MABMC can be applied directly to combine Pseudo-marginal Monte Carlo and the Exchange algorithm, yielding a higher average acceptance probability.
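The sketch below illustrates the bandit-over-acceptance-ratios idea only; it is not MABMC as specified in the paper. An epsilon-greedy bandit picks which of several randomized acceptance-ratio estimators (e.g., a pseudo-marginal ratio and an exchange-style ratio, supplied by the caller) to use at each Metropolis-Hastings step, rewarding arms by their observed acceptance.

```python
# Illustrative bandit-augmented Metropolis-Hastings loop (simplified).
import numpy as np

def bandit_mh(x0, propose, ratio_fns, n_steps=10_000, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    counts = np.ones(len(ratio_fns))            # pulls per acceptance rule
    rewards = np.ones(len(ratio_fns))           # reward = observed acceptances
    x, chain = x0, []
    for _ in range(n_steps):
        # epsilon-greedy choice among the randomized acceptance ratios
        if rng.random() < eps:
            arm = int(rng.integers(len(ratio_fns)))
        else:
            arm = int(np.argmax(rewards / counts))
        y = propose(x, rng)
        alpha = min(1.0, ratio_fns[arm](x, y, rng))   # randomized acceptance ratio
        accept = rng.random() < alpha
        counts[arm] += 1
        rewards[arm] += accept
        if accept:
            x = y
        chain.append(x)
    return np.array(chain)
```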


Understanding Agent Incentives using Causal Influence Diagrams. Part I: Single Action Settings

arXiv.org Artificial Intelligence

Agents are systems that optimize an objective function in an environment. Together, the goal and the environment induce secondary objectives: incentives. Modeling the agent-environment interaction in graphical models called influence diagrams, we can answer two fundamental questions about an agent's incentives directly from the graph: (1) which nodes is the agent incentivized to observe, and (2) which nodes is the agent incentivized to influence? The answers tell us which information and influence points need extra protection. For example, we may want a classifier for job applications not to use the ethnicity of the candidate, and a reinforcement learning agent not to take direct control of its reward mechanism. Different algorithms and training paradigms can lead to different influence diagrams, so our method can be used to identify algorithms with problematic incentives and help in designing algorithms with better incentives.
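As a toy illustration only (the paper's actual graphical criteria are more refined), one can encode an influence diagram as a DAG and use plain reachability to flag candidate observation and influence points for the job-application example; node names below are made up.

```python
# Toy influence-diagram DAG with simple reachability checks (not the paper's criteria).
import networkx as nx

G = nx.DiGraph([
    ("ethnicity", "application"),      # chance nodes feeding the application
    ("skills", "application"),
    ("application", "decision"),       # what the classifier observes and decides on
    ("decision", "hired"),
    ("skills", "job_performance"),
    ("hired", "job_performance"),
    ("job_performance", "utility"),    # objective node
])

decision, utility = "decision", "utility"

# Nodes the decision can affect on a directed path to the utility node.
influenceable = nx.descendants(G, decision) & nx.ancestors(G, utility)
# Nodes whose value reaches the decision (candidate observation points).
observed_upstream = nx.ancestors(G, decision)

print("candidate influence points:", sorted(influenceable))
print("information reaching the decision:", sorted(observed_upstream))
```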


Markov Networks: Undirected Graphical Models

#artificialintelligence

This article briefs you on Markov Networks, which fall under the family of Undirected Graphical Models (UGM). It is a follow-up to Bayesian Networks, a type of Directed Graphical Model. The key motivation behind these networks is to parameterize the joint probability distribution based on local independencies between random variables. Generally, a Bayesian Network requires pre-defining a directionality to assert the influence of one random variable on another. But there are cases where the interaction between nodes (or random variables) is symmetric in nature, and we would like a model that can represent this symmetry without directional influence.
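A minimal sketch of that undirected parameterization: the joint distribution of a small pairwise Markov network A - B - C is the normalized product of symmetric clique potentials, with no direction imposed on any edge. The potential values below are arbitrary.

```python
# Pairwise Markov network A - B - C: joint = product of potentials / partition function Z.
import itertools
import numpy as np

phi_AB = np.array([[10.0, 1.0],    # potential over (A, B); symmetric, no direction implied
                   [1.0, 10.0]])
phi_BC = np.array([[5.0, 1.0],     # potential over (B, C)
                   [1.0, 5.0]])

def unnormalized(a, b, c):
    return phi_AB[a, b] * phi_BC[b, c]

states = list(itertools.product([0, 1], repeat=3))
Z = sum(unnormalized(*s) for s in states)        # partition function by enumeration

joint = {s: unnormalized(*s) / Z for s in states}
print("Z =", Z)
print("P(A=0, B=0, C=0) =", round(joint[(0, 0, 0)], 3))
```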