conditional independence relation
Export Reviews, Discussions, Author Feedback and Meta-Reviews
First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance.

Identifying conditional independences between variables in graphical models is key to finding tractable solutions, and faithfulness is a key condition underlying these relationships. The authors provide necessary and sufficient conditions for faithfulness in Gaussian graphical models (based on partitioning the variables outside the conditioning set into two disjoint subsets), and show how this theoretical result can be translated into an algorithm for deciding whether a distribution is faithful.

PROS:
- Clear, well-written paper with illustrative examples
- Addresses a relevant problem and provides a meaningful theoretical result
- Provides a practical test for faithfulness in Gaussian graphical models

CONS:
- Theoretical result is restricted to Gaussian graphical models

Quality: This paper addresses a theoretical problem (faithfulness in Gaussian graphical models). The claims are well reasoned and the proofs support them. The resulting algorithm provides a useful test of faithfulness.
- North America > United States > California > Santa Clara County > Stanford (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- North America > United States > Arizona (0.04)
- North America > Canada (0.04)
- Information Technology > Data Science (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.47)
- North America > United States > New York (0.04)
- North America > United States > Connecticut > New Haven County > New Haven (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
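A faithfulness check in the Gaussian setting ultimately comes down to asking which partial correlations vanish, since for Gaussian variables a zero partial correlation is exactly a conditional independence. The snippet below is a minimal sketch of that building block only, not the paper's necessary-and-sufficient test; the function name and the toy Markov-chain example are assumptions made for illustration.

```python
# Minimal sketch (not the paper's test): in a Gaussian distribution,
# X_i and X_j are conditionally independent given a set S exactly when
# their partial correlation given S is zero, so faithfulness questions
# reduce to which partial correlations vanish.
import numpy as np

def partial_correlation(cov, i, j, cond):
    """Partial correlation of variables i and j given the indices in cond,
    computed from the joint covariance matrix cov."""
    idx = [i, j] + list(cond)
    prec = np.linalg.inv(cov[np.ix_(idx, idx)])   # precision of the selected block
    return -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])

# Toy example: the chain X0 -> X1 -> X2, so X0 is independent of X2 given X1.
rng = np.random.default_rng(0)
x0 = rng.normal(size=10_000)
x1 = 0.8 * x0 + rng.normal(size=10_000)
x2 = 0.5 * x1 + rng.normal(size=10_000)
cov = np.cov(np.stack([x0, x1, x2]))
print(partial_correlation(cov, 0, 2, [1]))   # close to 0
print(partial_correlation(cov, 0, 2, []))    # clearly nonzero
```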
Classifying Causal Structures: Ascertaining when Classical Correlations are Constrained by Inequalities
Khanna, Shashaank, Ansanelli, Marina Maciel, Pusey, Matthew F., Wolfe, Elie
The classical causal relations between a set of variables, some observed and some latent, can induce both equality constraints (typically conditional independences) and inequality constraints (instrumental and Bell inequalities being prototypical examples) on their compatible distribution over the observed variables. Enumerating a causal structure's implied inequality constraints is generally far more difficult than enumerating its equalities. Furthermore, only inequality constraints ever admit violation by quantum correlations. For both of these reasons, it is important to classify causal scenarios into those which impose inequality constraints versus those which do not. Here we develop methods for detecting such scenarios by appealing to d-separation, e-separation, and incompatible supports. Many (perhaps all?) scenarios with exclusively equality constraints can be detected via a condition articulated by Henson, Lal and Pusey (HLP). Considering all scenarios with up to 4 observed variables, which number in the thousands, we are able to resolve all but three causal scenarios, providing evidence that the HLP condition is, in fact, exhaustive.
- North America > Canada > Ontario > Waterloo Region > Waterloo (0.04)
- North America > United States > California (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (4 more...)
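The abstract above leans on d-separation (alongside e-separation and incompatible supports). As a point of reference, the sketch below implements only the standard d-separation test via moralized ancestral graphs, not the HLP-style classification itself; the encoding of the DAG as a dict of parent lists is an assumption for this sketch.

```python
# Illustrative sketch of the standard d-separation test (the classification
# in the paper goes well beyond this): X and Y are d-separated by Z in a DAG
# exactly when they are disconnected in the moralized ancestral graph of
# {X, Y} union Z after removing Z.
from collections import deque

def ancestors(dag, nodes):
    """All ancestors of `nodes`, including the nodes themselves."""
    seen, stack = set(nodes), list(nodes)
    while stack:
        for p in dag.get(stack.pop(), []):
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def d_separated(dag, x, y, z):
    """dag maps each node to the list of its parents."""
    keep = ancestors(dag, {x, y} | set(z))
    # Moralize: connect co-parents of every retained node, drop directions.
    adj = {v: set() for v in keep}
    for child in keep:
        pars = [p for p in dag.get(child, []) if p in keep]
        for p in pars:
            adj[child].add(p)
            adj[p].add(child)
        for a, b in ((a, b) for i, a in enumerate(pars) for b in pars[i + 1:]):
            adj[a].add(b)
            adj[b].add(a)
    # Delete the conditioning set, then test connectivity of x and y.
    blocked, visited, frontier = set(z), {x}, deque([x])
    while frontier:
        v = frontier.popleft()
        if v == y:
            return False
        for u in adj[v]:
            if u not in visited and u not in blocked:
                visited.add(u)
                frontier.append(u)
    return True

# Collider A -> C <- B: A and B are d-separated marginally,
# but conditioning on C connects them.
dag = {"A": [], "B": [], "C": ["A", "B"]}
print(d_separated(dag, "A", "B", []))     # True
print(d_separated(dag, "A", "B", ["C"]))  # False
```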
Restricted Hidden Cardinality Constraints in Causal Models
Zjawin, Beata, Wolfe, Elie, Spekkens, Robert W.
In causal studies, systems of variables are described by causal models [18, 22], which are composed of two elements: (i) the graphical representation of relationships between variables in a model, encoded in a directed acyclic graph, and (ii) the mathematical description of the conditional probability distribution of each variable given its causal parents. When a causal model involves hidden (i.e., unobserved) variables, any characterization of the model that is verifiable by observations should involve only observed variables. Therefore, one of the objectives of causal inference is to eliminate all hidden variables from the inequalities and equalities that describe the model. In principle, this can be achieved using the Tarski-Seidenberg quantifier elimination method [12]. However, its complexity is such that only models with few variables can be handled by this technique, hence the many attempts to simplify the problem.
- North America > Canada > Ontario > Waterloo Region > Waterloo (0.04)
- Europe > Poland > Pomerania Province > Gdańsk (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Diagnosis (0.84)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.66)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.48)
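To make the two ingredients of a causal model concrete, the toy computation below writes down a DAG with a hidden binary confounder U of two observed variables, attaches a conditional distribution to each variable given its parents, and sums U out to obtain the observed joint distribution, which is the only object an observationally verifiable constraint can mention. All numbers are made up for illustration; this is not the quantifier-elimination machinery discussed above.

```python
# Minimal worked example: U is a hidden binary confounder with U -> X and
# U -> Y; summing U out eliminates the hidden variable and leaves the
# observed joint P(X, Y).  Numbers are invented for illustration.
import numpy as np

p_u = np.array([0.3, 0.7])                     # P(U)
p_x_given_u = np.array([[0.9, 0.1],            # P(X | U=0)
                        [0.2, 0.8]])           # P(X | U=1)
p_y_given_u = np.array([[0.6, 0.4],            # P(Y | U=0)
                        [0.1, 0.9]])           # P(Y | U=1)

# P(x, y) = sum_u P(u) P(x | u) P(y | u)
p_xy = np.einsum("u,ux,uy->xy", p_u, p_x_given_u, p_y_given_u)
print(p_xy)
print(p_xy.sum())   # 1.0
```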
Representation and Learning of Context-Specific Causal Models with Observational and Interventional Data
We consider the problem of representation and learning of causal models that encode context-specific information for discrete data. To represent such models we define the class of CStrees. This class is a subclass of staged tree models that captures context-specific information in a DAG model by the use of a staged tree, or equivalently, by a collection of DAGs. We provide a characterization of the complete set of asymmetric conditional independence relations encoded by a CStree that generalizes the global Markov property for DAGs. As a consequence, we obtain a graphical characterization of model equivalence for CStrees generalizing that of Verma and Pearl for DAG models. We also provide a closed-form formula for the maximum likelihood estimator of a CStree and use it to show that the Bayesian Information Criterion is a locally consistent score function for this model class. We then use the theory for general interventions in staged tree models to provide a global Markov property and a characterization of model equivalence for general interventions in CStrees. As examples, we apply these results to two real data sets, learning BIC-optimal CStrees for each and analyzing their context-specific causal structure.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > Austria > Vienna (0.14)
- Europe > Germany > Saxony-Anhalt > Magdeburg (0.04)
- (5 more...)
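For orientation, the sketch below spells out the count-ratio maximum likelihood estimates and the BIC penalty for an ordinary discrete DAG model; the paper's closed-form MLE and locally consistent BIC are stated for the richer CStree class, which this sketch does not implement. The data layout and all names are assumptions made for illustration.

```python
# Illustrative sketch for an ordinary discrete DAG model (not the CStree
# class itself): the MLE of each conditional probability is a ratio of
# counts, and BIC subtracts (free parameters / 2) * log(sample size)
# from the maximized log-likelihood.
from collections import Counter
from math import log, prod
import random

def bic_discrete_dag(data, parents, cardinalities):
    """data: list of dicts {variable: value}; parents: dict var -> list of parent names."""
    n = len(data)
    loglik, dim = 0.0, 0
    for var, pars in parents.items():
        joint = Counter((tuple(row[p] for p in pars), row[var]) for row in data)
        marg = Counter(tuple(row[p] for p in pars) for row in data)
        # Plugging the count-ratio MLEs back in gives the maximized log-likelihood.
        loglik += sum(c * log(c / marg[pa]) for (pa, _), c in joint.items())
        # Free parameters: (|var| - 1) per configuration of the parents.
        dim += (cardinalities[var] - 1) * prod(cardinalities[p] for p in pars)
    return loglik - 0.5 * dim * log(n)

# Toy comparison on binary data where Y is a noisy copy of X:
# the model X -> Y should score higher than the empty graph.
random.seed(1)
xs = [random.randint(0, 1) for _ in range(500)]
data = [{"X": x, "Y": x ^ (random.random() < 0.2)} for x in xs]
card = {"X": 2, "Y": 2}
print(bic_discrete_dag(data, {"X": [], "Y": ["X"]}, card))
print(bic_discrete_dag(data, {"X": [], "Y": []}, card))
```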
Robust Estimation of Tree Structured Ising Models
Katiyar, Ashish, Shah, Vatsal, Caramanis, Constantine
We consider the task of learning Ising models when the signs of different random variables are flipped independently with possibly unequal, unknown probabilities. In this paper, we focus on the problem of robust estimation of tree-structured Ising models. Without any additional assumption of side information, this is an open problem. We first prove that this problem is unidentifiable; however, the unidentifiability is limited to a small equivalence class of trees formed by leaf nodes exchanging positions with their neighbors. Next, we propose an algorithm to solve the above problem with logarithmic sample complexity in the number of nodes and polynomial run-time complexity. Lastly, we empirically demonstrate that, as expected, existing algorithms are not inherently robust in the proposed setting, whereas our algorithm correctly recovers the underlying equivalence class.
- North America > United States > Texas > Travis County > Austin (0.04)
- North America > Canada > Alberta (0.04)
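As a contrast to the robust estimator described above, the sketch below shows the standard (non-robust) Chow-Liu procedure for learning a tree-structured model: pairwise mutual informations followed by a maximum-weight spanning tree. The paper's algorithm and its equivalence-class guarantees are not reproduced here; the function names and the toy chain are assumptions.

```python
# Standard (non-robust) Chow-Liu tree learning, shown only as the kind
# of baseline the abstract contrasts with; the paper's robust estimator
# and its equivalence-class analysis are not reproduced here.
import numpy as np
from itertools import combinations

def empirical_mutual_information(a, b):
    """Plug-in mutual information between two arrays of +/-1 spins."""
    mi = 0.0
    for va in (-1, 1):
        for vb in (-1, 1):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def chow_liu_tree(samples):
    """samples: (n, p) array of +/-1 spins; returns the edges of a
    maximum mutual-information spanning tree (Kruskal with union-find)."""
    p = samples.shape[1]
    weights = {(i, j): empirical_mutual_information(samples[:, i], samples[:, j])
               for i, j in combinations(range(p), 2)}
    parent = list(range(p))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    edges = []
    for (i, j), _ in sorted(weights.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            edges.append((i, j))
    return edges

# Toy chain 0 - 1 - 2: each spin copies its neighbor with 10% flip noise;
# the recovered tree should consist of the edges (0, 1) and (1, 2).
rng = np.random.default_rng(2)
s0 = rng.choice([-1, 1], size=2000)
s1 = s0 * rng.choice([1, -1], size=2000, p=[0.9, 0.1])
s2 = s1 * rng.choice([1, -1], size=2000, p=[0.9, 0.1])
print(chow_liu_tree(np.stack([s0, s1, s2], axis=1)))
```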
Causal Structure Discovery from Distributions Arising from Mixtures of DAGs
Saeed, Basil, Panigrahi, Snigdha, Uhler, Caroline
We consider distributions arising from a mixture of causal models, where each model is represented by a directed acyclic graph (DAG). We provide a graphical representation of such mixture distributions and prove that this representation encodes the conditional independence relations of the mixture distribution. We then consider the problem of structure learning based on samples from such distributions. Since the mixing variable is latent, we consider causal structure discovery algorithms such as FCI that can deal with latent variables. We show that such algorithms recover a "union" of the component DAGs and can identify variables whose conditional distributions vary across the component DAGs. We demonstrate our results on synthetic and real data, showing that the inferred graph identifies nodes that vary between the different mixture components. As an immediate application, we demonstrate how retrieval of this causal information can be used to cluster samples according to each mixture component.
- North America > United States > Michigan > Washtenaw County > Ann Arbor (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.14)
- Europe > Switzerland > Zürich > Zürich (0.14)
- (2 more...)
- Health & Medicine > Therapeutic Area > Oncology (0.68)
- Health & Medicine > Therapeutic Area > Immunology (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models (0.48)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty (0.35)
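Constraint-based algorithms such as FCI, mentioned in the last abstract, are driven by a conditional-independence oracle. The sketch below shows one standard choice of oracle for Gaussian data, a Fisher z-test on the partial correlation; it is a generic building block, not the paper's mixture-specific analysis, and all names are assumptions.

```python
# Minimal sketch of the conditional-independence test that constraint-based
# algorithms such as FCI consume on Gaussian data: a Fisher z-transform of
# the partial correlation.  Generic building block, not the paper's method.
import numpy as np
from math import sqrt, log, erf

def fisher_z_ci_test(data, i, j, cond, alpha=0.05):
    """Return True if X_i appears independent of X_j given X_cond (n x p data matrix)."""
    n = data.shape[0]
    idx = [i, j] + list(cond)
    prec = np.linalg.inv(np.cov(data[:, idx], rowvar=False))
    r = -prec[0, 1] / sqrt(prec[0, 0] * prec[1, 1])          # partial correlation
    z = 0.5 * log((1 + r) / (1 - r)) * sqrt(n - len(cond) - 3)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))     # two-sided normal tail
    return p_value > alpha

# Toy collider X0 -> X2 <- X1: X0 and X1 are marginally independent but
# become dependent once we condition on their common effect X2.
rng = np.random.default_rng(3)
x0, x1 = rng.normal(size=5000), rng.normal(size=5000)
x2 = x0 + x1 + 0.5 * rng.normal(size=5000)
data = np.column_stack([x0, x1, x2])
print(fisher_z_ci_test(data, 0, 1, []))     # typically True (independence not rejected)
print(fisher_z_ci_test(data, 0, 1, [2]))    # False (dependence detected)
```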