Collaborating Authors

 Pearl, Judea


Generalized Instrumental Variables

arXiv.org Artificial Intelligence

This paper concerns the assessment of direct causal effects from a combination of: (i) non-experimental data, and (ii) qualitative domain knowledge. Domain knowledge is encoded in the form of a directed acyclic graph (DAG), in which all interactions are assumed linear, and some variables are presumed to be unobserved. We provide a generalization of the well-known method of Instrumental Variables, which allows its application to models with few conditional independencies.
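
The classical IV estimand that this method generalizes can be illustrated on a toy linear model. A minimal sketch (hypothetical coefficients and variable names; the paper's generalized graphical criterion itself is not implemented here):

```python
import numpy as np

# Toy linear model: Z -> X -> Y, with unobserved confounder U -> X, U -> Y.
# Z is a valid instrument: it affects Y only through X and is independent of U.
rng = np.random.default_rng(0)
n = 100_000
u = rng.normal(size=n)                        # unobserved confounder
z = rng.normal(size=n)                        # instrument
x = 0.8 * z + 1.0 * u + rng.normal(size=n)
y = 0.5 * x + 1.0 * u + rng.normal(size=n)    # true direct effect of X on Y: 0.5

# Naive regression of Y on X is biased by the confounder U.
naive = np.cov(x, y)[0, 1] / np.var(x)

# The IV estimand cov(Z, Y) / cov(Z, X) recovers the causal coefficient.
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"naive OLS: {naive:.3f}, IV: {iv:.3f}  (truth: 0.5)")
```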


On the Testable Implications of Causal Models with Hidden Variables

arXiv.org Artificial Intelligence

The validity of a causal model can be tested only if the model imposes constraints on the probability distribution that governs the generated data. In the presence of unmeasured variables, causal models may impose two types of constraints: conditional independencies, as read through the d-separation criterion, and functional constraints, for which no general criterion is available. This paper offers a systematic way of identifying functional constraints and, thus, facilitates the task of testing causal models as well as inferring such models from data.
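
The conditional-independence half of these constraints can be enumerated mechanically from the DAG; the functional (Verma-type) constraints the paper targets go beyond what this check captures. A minimal sketch of reading one d-separation claim off a hypothetical graph, assuming a networkx version that provides nx.d_separated (newer releases rename it nx.is_d_separator):

```python
import networkx as nx

# Hypothetical DAG with a hidden confounder U of X and Y, plus a mediator M.
G = nx.DiGraph([("U", "X"), ("U", "Y"), ("X", "M"), ("M", "Y")])

# d-separation reads conditional independencies off the graph:
# X and Y are NOT d-separated given M alone (the back-door path
# X <- U -> Y stays open), but they are given {M, U}.
print(nx.d_separated(G, {"X"}, {"Y"}, {"M"}))        # False
print(nx.d_separated(G, {"X"}, {"Y"}, {"M", "U"}))   # True
```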


Transportability of Causal Effects: Completeness Results

AAAI Conferences

The study of transportability aims to identify conditions under which causal information learned from experiments can be reused in a different environment where only passive observations can be collected. The theory introduced in [Pearl and Bareinboim, 2011] (henceforth [PB, 2011]) defines formal conditions for such transfer but falls short of providing an effective procedure for deciding, given assumptions about differences between the source and target domains, whether transportability is feasible. This paper provides such a procedure. It establishes a necessary and sufficient condition for deciding when causal effects in the target domain are estimable from both the statistical information available and the causal information transferred from the experiments. The paper further provides a complete algorithm for computing the transport formula, that is, a way of fusing experimental and observational information to synthesize an estimate of the desired causal relation.
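
In the simplest case the transport formula reduces to re-weighting the source experimental distribution by the target's distribution of the variables on which the domains differ: P*(y | do(x)) = sum_z P(y | do(x), z) P*(z). A toy numeric sketch with hypothetical binary variables:

```python
import numpy as np

# Source experiment: P(y=1 | do(x=1), z) for binary Z, estimated in the source.
p_y1_dox1_given_z = np.array([0.30, 0.70])   # indexed by z in {0, 1}

# The domains differ only in the distribution of Z; P*(z) is observed in the target.
p_z_source = np.array([0.80, 0.20])
p_z_target = np.array([0.40, 0.60])

# Transport formula: P*(y=1 | do(x=1)) = sum_z P(y=1 | do(x=1), z) P*(z)
effect_source = p_y1_dox1_given_z @ p_z_source   # 0.38
effect_target = p_y1_dox1_given_z @ p_z_target   # 0.54

print(f"source: {effect_source:.2f}, transported to target: {effect_target:.2f}")
```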


Identification of Conditional Interventional Distributions

arXiv.org Artificial Intelligence

The subject of this paper is the elucidation of effects of actions from causal assumptions represented as a directed graph, and statistical knowledge given as a probability distribution. In particular, we are interested in predicting conditional distributions resulting from performing an action on a set of variables and, subsequently, taking measurements of another set. We provide a necessary and sufficient graphical condition for the cases where such distributions can be uniquely computed from the available information, as well as an algorithm which performs this computation whenever the condition holds. Furthermore, we use our results to prove completeness of do-calculus [Pearl, 1995] for the same identification problem.
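
The best-known special case of such identification is back-door adjustment, P(y | do(x)) = sum_z P(y | x, z) P(z), which the do-calculus derives in a few rule applications. A toy numeric sketch with hypothetical binary variables (the paper's general algorithm is not reproduced here):

```python
import numpy as np

# Joint distribution P(z, x, y) for a model where Z satisfies the back-door
# criterion relative to (X, Y). Axes: z, x, y; entries sum to 1.
p = np.array([[[0.20, 0.05], [0.05, 0.10]],
              [[0.05, 0.10], [0.10, 0.35]]])

p_z = p.sum(axis=(1, 2))                          # P(z)
p_y_given_xz = p / p.sum(axis=2, keepdims=True)   # P(y | x, z)

# Back-door adjustment: P(y | do(x)) = sum_z P(y | x, z) P(z)
p_y_do_x = np.einsum("zxy,z->xy", p_y_given_xz, p_z)
print(p_y_do_x)   # rows indexed by x, columns by y; each row sums to 1
```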


Graphical Condition for Identification in Recursive SEM

arXiv.org Artificial Intelligence

The paper concerns the problem of predicting the effect of actions or interventions on a system from a combination of (i) statistical data on a set of observed variables, and (ii) qualitative causal knowledge encoded in the form of a directed acyclic graph (DAG). The DAG represents a set of linear equations called a Structural Equation Model (SEM), whose coefficients are parameters representing direct causal effects. Reliable quantitative conclusions can only be obtained from the model if the causal effects are uniquely determined by the data, that is, if there exists a unique parametrization for the model that makes it compatible with the data. If this is the case, the model is called identified. The main result of the paper is a general sufficient condition for identification of recursive SEMs.
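
When all relevant variables are observed (no hidden confounders), a recursive linear SEM is identified by regressing each variable on its parents; the paper's condition extends identification to graphs with unobserved variables. A minimal sketch of the fully observed case, with hypothetical coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Recursive SEM over the DAG X -> Z -> Y, X -> Y, with independent errors.
x = rng.normal(size=n)
z = 0.7 * x + rng.normal(size=n)
y = 0.3 * x + 0.5 * z + rng.normal(size=n)

# With no unobserved confounding, each structural coefficient is identified
# by least squares of the child on its parents.
A = np.column_stack([x, z])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # approximately [0.3, 0.5]
```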


What Counterfactuals Can Be Tested

arXiv.org Artificial Intelligence

Counterfactual statements, e.g., "my headache would be gone had I taken an aspirin" are central to scientific discourse, and are formally interpreted as statements derived from "alternative worlds". However, since they invoke hypothetical states of affairs, often incompatible with what is actually known or observed, testing counterfactuals is fraught with conceptual and practical difficulties. In this paper, we provide a complete characterization of "testable counterfactuals," namely, counterfactual statements whose probabilities can be inferred from physical experiments. We provide complete procedures for discerning whether a given counterfactual is testable and, if so, expressing its probability in terms of experimental data.
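
The flavor of the problem: some counterfactual probabilities, e.g. P(Y_x = y), coincide with experimental quantities P(y | do(x)) and are therefore testable, while joint counterfactuals such as P(Y_1 = 1, Y_0 = 0) generally are not. A toy sketch (hypothetical numbers) of two models that agree on every experimental distribution yet disagree on a joint counterfactual:

```python
# For a binary treatment, each unit has one of four response types:
#   never:  Y_0 = 0, Y_1 = 0     helped: Y_0 = 0, Y_1 = 1
#   hurt:   Y_0 = 1, Y_1 = 0     always: Y_0 = 1, Y_1 = 1
model_a = {"never": 0.3, "helped": 0.4, "hurt": 0.0, "always": 0.3}
model_b = {"never": 0.2, "helped": 0.5, "hurt": 0.1, "always": 0.2}

for m in (model_a, model_b):
    p_y1_do1 = m["helped"] + m["always"]   # experimental P(Y=1 | do(X=1))
    p_y1_do0 = m["hurt"] + m["always"]     # experimental P(Y=1 | do(X=0))
    p_helped = m["helped"]                 # counterfactual P(Y_1=1, Y_0=0)
    print(p_y1_do1, p_y1_do0, p_helped)

# Both models print the same experimental quantities (0.7 and 0.3) but
# different P(Y_1=1, Y_0=0): that counterfactual is not testable here.
```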


Effects of Treatment on the Treated: Identification and Generalization

arXiv.org Artificial Intelligence

Many applications of causal analysis call for assessing, retrospectively, the effect of withholding an action that has in fact been implemented. This counterfactual quantity, sometimes called the "effect of treatment on the treated" (ETT), has been used to evaluate educational programs, critique public policies, and justify individual decision making. In this paper we explore the conditions under which ETT can be estimated from (i.e., identified in) experimental and/or observational studies. We show that, when the action invokes a singleton variable, the conditions for ETT identification have simple characterizations in terms of causal diagrams. We further give a graphical characterization of the conditions under which the effects of multiple treatments on the treated can be identified, as well as ways in which the ETT estimand can be constructed from both interventional and observational distributions.
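
For a binary singleton treatment with an admissible back-door set Z, one such construction is ETT = E[Y | X=1] - sum_z E[Y | X=0, Z=z] P(z | X=1): the factual term is read directly from observational data and the counterfactual term is adjusted. A toy numeric sketch with hypothetical binary variables:

```python
import numpy as np

# Observational ingredients for a binary Z satisfying the back-door criterion.
e_y_given_x1 = 0.70                        # E[Y | X=1]
e_y_given_x0_z = np.array([0.20, 0.40])    # E[Y | X=0, Z=z] for z in {0, 1}
p_z_given_x1 = np.array([0.30, 0.70])      # P(Z=z | X=1)

# ETT = E[Y_1 - Y_0 | X=1]
#     = E[Y | X=1] - sum_z E[Y | X=0, Z=z] P(Z=z | X=1)
ett = e_y_given_x1 - e_y_given_x0_z @ p_z_given_x1
print(f"ETT = {ett:.2f}")   # 0.70 - 0.34 = 0.36
```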


Transportability of Causal and Statistical Relations: A Formal Approach

AAAI Conferences

We address the problem of transferring information learned from experiments to a different environment, in which only passive observations can be collected. We introduce a formal representation called "selection diagrams" for expressing knowledge about differences and commonalities between environments and, using this representation, we derive procedures for deciding whether effects in the target environment can be inferred from experiments conducted elsewhere. When the answer is affirmative, the procedures identify the set of experiments and observations that need be conducted to license the transport. We further discuss how transportability analysis can guide the transfer of knowledge in non-experimental learning to minimize re-measurement cost and improve prediction power.
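
A selection diagram augments the causal DAG with selection nodes S pointing at the variables whose mechanisms may differ across environments; a set Z then licenses the simple transport formula when S is d-separated from Y given {X, Z} in the post-intervention graph (S-admissibility). A minimal sketch with a hypothetical graph, again assuming a networkx version that provides nx.d_separated:

```python
import networkx as nx

# Causal DAG: Z -> X, Z -> Y, X -> Y. The environments are assumed to
# differ only in the mechanism of Z, so a selection node S points at Z.
G = nx.DiGraph([("Z", "X"), ("Z", "Y"), ("X", "Y"), ("S", "Z")])

# Post-intervention graph for do(X): delete all edges entering X.
G_do_x = G.copy()
G_do_x.remove_edges_from(list(G_do_x.in_edges("X")))

# Z is S-admissible for transporting P(y | do(x)) if S _||_ Y | {X, Z}
# in the post-intervention graph; the transport formula then re-weights
# P(y | do(x), z) by the target distribution P*(z).
print(nx.d_separated(G_do_x, {"S"}, {"Y"}, {"X", "Z"}))   # True
```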


Controlling Selection Bias in Causal Inference

AAAI Conferences

Selection bias, caused by preferential exclusion of units (or samples) from the data, is a major obstacle to valid causal inferences, for it cannot be removed or even detected by randomized experiments. This paper highlights several graphical and algebraic methods capable of mitigating and sometimes eliminating this bias.
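
The core difficulty can be reproduced in a few lines: when inclusion in the sample depends on both treatment and outcome, conditioning on selection opens a collider path that randomization cannot close. A toy simulation (hypothetical numbers):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Randomized treatment X with no effect whatsoever on the outcome Y.
x = rng.integers(0, 2, size=n)
y = rng.integers(0, 2, size=n)

# Units enter the sample (S=1) preferentially when X=1 or Y=1; S is a
# collider, so conditioning on S=1 induces spurious X-Y dependence.
p_select = 0.1 + 0.4 * x + 0.4 * y
s = rng.random(n) < p_select

full = y[x == 1].mean() - y[x == 0].mean()                  # ~0: no effect
biased = y[(x == 1) & s].mean() - y[(x == 0) & s].mean()    # spuriously negative
print(f"full sample: {full:+.3f}, selected sample: {biased:+.3f}")
```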


Reasoning with Cause and Effect

AI Magazine

This article is an edited transcript of a lecture given at IJCAI-99, Stockholm, Sweden, on 4 August 1999. The article summarizes concepts, principles, and tools that were found useful in applications involving causal modeling. The principles are based on structural-model semantics, in which functional (or counterfactual) relationships representing autonomous physical processes are the fundamental building blocks. The article presents the conceptual basis of this semantics, illustrates its application in simple problems, and discusses its ramifications for computational and cognitive problems concerning causation.