Dormant Independence

AAAI Conferences

The construction of causal graphs from non-experimental data rests on a set of constraints that the graph structure imposes on all probability distributions compatible with the graph. These constraints are of two types: conditional independencies and the algebraic constraints first noted by Verma. While conditional independencies are well studied and frequently used in causal induction algorithms, Verma constraints are still poorly understood and rarely applied. In this paper we examine a special subset of Verma constraints that are easy to understand, identify, and apply; they arise from "dormant independencies," namely, conditional independencies that hold in interventional distributions. We give a complete algorithm for determining whether a dormant independence between two sets of variables is entailed by the causal graph and is identifiable, that is, whether it resides in an interventional distribution that can be predicted without resorting to interventions. We further show the usefulness of dormant independencies in model testing and induction by giving an algorithm that uses constraints entailed by dormant independencies to prune extraneous edges from a given causal graph.
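To make the idea concrete, the sketch below (not taken from the paper) simulates the standard four-variable Verma example: V1 -> V2 -> V3 -> V4 with a hidden confounder U of V2 and V4. In that graph the dormant independence "V1 is independent of V4 after intervening on V3" is identifiable, and it shows up in observational data as the Verma constraint that the sum over v2 of P(v4 | v1, v2, v3) P(v2 | v1) does not depend on v1. The particular binary structural equations, parameter values, and sample size are illustrative assumptions; the paper's own algorithms are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

# Illustrative binary SCM for the classic "Verma graph":
# V1 -> V2 -> V3 -> V4, with a hidden confounder U of V2 and V4.
u  = rng.random(n) < 0.5
v1 = rng.random(n) < 0.5
v2 = rng.random(n) < 0.2 + 0.4 * v1 + 0.3 * u
v3 = rng.random(n) < 0.3 + 0.5 * v2
v4 = rng.random(n) < 0.2 + 0.3 * v3 + 0.4 * u

def cond_prob(event, given):
    """Empirical P(event | given) from boolean indicator arrays."""
    return (event & given).sum() / given.sum()

def verma_functional(a1, a3, a4):
    """Sum over v2 of P(v4=a4 | v1=a1, v2, v3=a3) * P(v2 | v1=a1).

    The dormant independence of V1 and V4 in the (identifiable)
    distribution P(. | do(V3)) says this quantity should not depend on a1."""
    total = 0.0
    for a2 in (False, True):
        cell = (v1 == a1) & (v2 == a2) & (v3 == a3)
        total += cond_prob(v4 == a4, cell) * cond_prob(v2 == a2, v1 == a1)
    return total

# The two columns should agree up to sampling noise for every (v3, v4) cell,
# even though V1 and V4 are dependent in the raw observational data.
for a3 in (False, True):
    for a4 in (False, True):
        print(f"v3={int(a3)}, v4={int(a4)}: "
              f"g(v1=0)={verma_functional(False, a3, a4):.3f}  "
              f"g(v1=1)={verma_functional(True, a3, a4):.3f}")
```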



Link

AAAI Conferences

The implication problem of probabilistic conditional independencies is investigated in the presence of missing data. Here, graph separation axioms fail to hold for saturated conditional independencies, unlike in the idealized case with no missing data. Several axiomatic, algorithmic, and logical characterizations of the implication problem for saturated conditional independencies are established. In particular, equivalences are shown to the implication problem of a propositional fragment under Levesque's situations, and to that of Lien's class of multivalued database dependencies under null values.


Butz

AAAI Conferences

Testing independencies in Bayesian networks (BNs) is a fundamental task in probabilistic reasoning. In this paper, we propose inaugural-separation (i-separation) as a new method for testing independencies in BNs and establish its correctness. The method has several theoretical and practical advantages: i-separation is simpler than d-separation, the classical method for testing independencies in BNs, in at least five ways, the most important being that "blocking" works in an intuitive fashion. In practice, our empirical evaluation shows that i-separation tends to be faster than d-separation in large BNs.
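The i-separation procedure itself is not spelled out in the abstract, so it is not reproduced here; instead, the sketch below shows the classical baseline it is compared against. It implements d-separation via the standard moral-graph criterion: X and Y are d-separated by Z exactly when they are disconnected, after deleting Z, in the moralized subgraph induced by X, Y, Z and all of their ancestors. The dictionary-of-parents encoding and the small chain example are illustrative assumptions.

```python
from typing import Dict, Iterable, Set

def d_separated(parents: Dict[str, Iterable[str]],
                xs: Set[str], ys: Set[str], zs: Set[str]) -> bool:
    """Classical d-separation test via the moral-graph criterion.
    `parents` maps each node to an iterable of its parents in the DAG."""
    xs, ys, zs = set(xs), set(ys), set(zs)

    # 1. Ancestral subgraph: X, Y, Z and all of their ancestors.
    keep, stack = set(), list(xs | ys | zs)
    while stack:
        v = stack.pop()
        if v not in keep:
            keep.add(v)
            stack.extend(parents.get(v, ()))

    # 2. Moralize: link every pair of co-parents, then drop edge directions.
    adj = {v: set() for v in keep}
    for v in keep:
        ps = list(parents.get(v, ()))
        for p in ps:
            adj[v].add(p)
            adj[p].add(v)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j])
                adj[ps[j]].add(ps[i])

    # 3. Delete Z and check whether any node in X can reach any node in Y.
    seen, stack = set(), [x for x in xs if x not in zs]
    while stack:
        v = stack.pop()
        if v in ys:
            return False                 # connected: not d-separated
        if v in seen:
            continue
        seen.add(v)
        stack.extend(adj[v] - zs)        # never walk through Z
    return True                          # disconnected: d-separated

# Tiny example: the chain A -> B -> C.
parents = {"A": [], "B": ["A"], "C": ["B"]}
print(d_separated(parents, {"A"}, {"C"}, set()))   # False: the chain is open
print(d_separated(parents, {"A"}, {"C"}, {"B"}))   # True: B blocks the chain
```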


Whether Non-Correlation Implies Non-Causation

AAAI Conferences

It has been well argued that correlation does not imply causation. Is the converse true: does non-correlation imply non-causation, or, put more plainly, does causation imply correlation? Here we argue that this question captures the semantic essence of the faithfulness assumption of causal graphs. Although the statement is intuitively reasonable, it is not categorically true (though it is true with probability one), and this brings the validity of causal graphs into question. This work reviews Cartwright's arguments against faithfulness, presents a philosophical case in favor of the faithfulness assumption, and shows how the causal graph formalism can be used to troubleshoot scenarios in which faithfulness is violated.
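A small simulation (an illustrative construction, not taken from the paper) shows the kind of faithfulness violation at issue: X is a direct cause of Z, yet the coefficients on the direct path and on the indirect path through a mediator M are tuned to cancel, so the marginal correlation of X and Z is essentially zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Linear SCM: X -> M -> Z and X -> Z. The direct effect (+0.5) and the
# indirect effect (0.8 * -0.625 = -0.5) cancel exactly, so X and Z are
# uncorrelated even though X is a cause of Z (a faithfulness violation).
x = rng.normal(size=n)
m = 0.8 * x + rng.normal(size=n)
z = 0.5 * x - 0.625 * m + rng.normal(size=n)

print("corr(X, Z) =", round(float(np.corrcoef(x, z)[0, 1]), 4))        # ~ 0.0

# The causal structure is still detectable: regressing Z on X and M
# recovers the structural coefficients, including the +0.5 direct effect.
design = np.column_stack([x, m, np.ones(n)])
coef, *_ = np.linalg.lstsq(design, z, rcond=None)
print("direct effect of X on Z given M =", round(float(coef[0]), 4))   # ~ 0.5
```

Perturbing any one coefficient destroys the cancellation, which is why such violations occupy a measure-zero set of parameterizations; this is the sense in which the statement holds "with probability one."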