Yanover, Chen
Benchmarking Framework for Performance-Evaluation of Causal Inference Analysis
Shimoni, Yishai, Yanover, Chen, Karavani, Ehud, Goldschmidt, Yaara
Causal inference analysis is the estimation of the effects of actions on outcomes. In the context of healthcare data, this means estimating patient outcomes under counterfactual treatments (i.e., treatments that were not observed). Compared to classic machine learning methods, evaluating and validating causal inference analysis is more challenging because ground-truth counterfactual outcomes can never be obtained in any real-world scenario. Here, we present a comprehensive framework for benchmarking algorithms that estimate causal effects. The framework includes unlabeled data for prediction, labeled data for validation, and code for automatic evaluation of algorithm predictions using both established and novel metrics. The data is based on real-world covariates, while the treatment assignments and outcomes are simulated, which provides the basis for validation. In this framework we address two questions: one of scaling, and the other of data censoring. The framework is available as open source code at https://github.com/IBM-HRL-MLHLS/IBM-Causal-Inference-Benchmarking-Framework
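The evaluation idea in this abstract — scoring effect estimates against simulated ground truth — can be sketched as follows. This is a minimal illustration, not the framework's actual API or metric set; the function name, the bias/RMSE pair, and the toy simulation are assumptions chosen for clarity.

```python
import math
import random

def evaluate_effect_estimates(true_ite, predicted_ite):
    """Score individual treatment-effect (ITE) predictions against
    simulated ground truth. Returns the bias of the estimated
    average effect and the RMSE of the individual effects."""
    n = len(true_ite)
    bias = sum(predicted_ite) / n - sum(true_ite) / n
    rmse = math.sqrt(sum((p - t) ** 2
                         for p, t in zip(predicted_ite, true_ite)) / n)
    return bias, rmse

# Toy simulation: the true effects are known only because both
# potential outcomes were generated for every unit.
random.seed(0)
true_ite = [random.gauss(1.0, 0.5) for _ in range(1000)]
# A hypothetical, slightly biased "estimator" of the same effects.
predicted_ite = [t + random.gauss(0.1, 0.2) for t in true_ite]
bias, rmse = evaluate_effect_estimates(true_ite, predicted_ite)
```

Because the validation set carries simulated counterfactual outcomes, metrics like these can be computed exactly — something impossible with observational data alone.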
MAP Estimation, Linear Programming and Belief Propagation with Convex Free Energies
Weiss, Yair, Yanover, Chen, Meltzer, Talya
Finding the most probable assignment (MAP) in a general graphical model is known to be NP-hard, but good approximations have been attained with max-product belief propagation (BP) and its variants. In particular, it is known that using BP on a single-cycle graph, or tree-reweighted BP on an arbitrary graph, will give the MAP solution if the beliefs have no ties. In this paper we extend the setting under which BP can be used to provably extract the MAP. We define Convex BP as the class of BP algorithms based on a convex free energy approximation and show that this class includes ordinary BP on single-cycle graphs, tree-reweighted BP, and many other BP variants. We show that when there are no ties, fixed points of convex max-product BP provably give the MAP solution. We also show that convex sum-product BP at sufficiently small temperatures can be used to solve the linear programs that arise from relaxing the MAP problem. Finally, we derive a novel condition that allows us to recover the MAP solution even if some of the convex BP beliefs have ties. In experiments, we show that our theorems allow us to find the MAP in many real-world instances of graphical models where exact inference using the junction tree is infeasible.
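The tie-free guarantee referenced above can be seen on the simplest case: on a tree (here, a chain), max-product beliefs are exact max-marginals, and taking each variable's argmax recovers the MAP when there are no ties. The following sketch uses hypothetical potentials chosen so that beliefs are tie-free; it illustrates the decoding rule, not the paper's convex BP algorithms.

```python
import itertools

# Chain x1 - x2 - x3 with binary states; potentials are hypothetical
# and chosen so that the resulting beliefs have no ties.
unary = [[0.6, 0.4], [0.3, 0.7], [0.5, 0.5]]
pair = [[[0.9, 0.1], [0.2, 0.8]],
        [[0.7, 0.3], [0.4, 0.6]]]

def map_by_max_product(unary, pair):
    """Max-product message passing on a chain; returns the
    per-variable argmax of the beliefs (max-marginals)."""
    n, k = len(unary), len(unary[0])
    fwd = [[1.0] * k for _ in range(n)]   # messages passed left-to-right
    for i in range(1, n):
        for s in range(k):
            fwd[i][s] = max(fwd[i-1][t] * unary[i-1][t] * pair[i-1][t][s]
                            for t in range(k))
    bwd = [[1.0] * k for _ in range(n)]   # messages passed right-to-left
    for i in range(n - 2, -1, -1):
        for s in range(k):
            bwd[i][s] = max(bwd[i+1][t] * unary[i+1][t] * pair[i][s][t]
                            for t in range(k))
    beliefs = [[fwd[i][s] * unary[i][s] * bwd[i][s] for s in range(k)]
               for i in range(n)]
    return [max(range(k), key=lambda s: b[s]) for b in beliefs]

def map_brute_force(unary, pair):
    """Enumerate all configurations as a correctness check."""
    n, k = len(unary), len(unary[0])
    def score(cfg):
        p = 1.0
        for i in range(n):
            p *= unary[i][cfg[i]]
        for i in range(n - 1):
            p *= pair[i][cfg[i]][cfg[i + 1]]
        return p
    return list(max(itertools.product(range(k), repeat=n), key=score))

bp_map = map_by_max_product(unary, pair)
```

On this instance the belief-decoded assignment matches the brute-force MAP, as the tie-free theory predicts.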
Finding the M Most Probable Configurations using Loopy Belief Propagation
Yanover, Chen, Weiss, Yair
Loopy belief propagation (BP) has been successfully used in a number of difficult graphical models to find the most probable configuration of the hidden variables. In applications ranging from protein folding to image analysis, one would like to find not just the best configuration but rather the top M. While this problem has been solved using the junction tree formalism, in many real-world problems the clique size in the junction tree is prohibitively large. In this work we address the problem of finding the M best configurations when exact inference is impossible. We start by developing a new exact inference algorithm for calculating the best configurations that uses only max-marginals. For approximate inference, we replace the max-marginals with the beliefs calculated using max-product BP and generalized BP. We show empirically that the algorithm can accurately and rapidly approximate the M best configurations in graphs with hundreds of variables.
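The key observation behind finding the second-best configuration is a partition argument: the runner-up must disagree with the MAP in at least one variable, so it suffices to search each "disagree at position i" subspace and keep the best. The sketch below demonstrates that argument by brute-force enumeration on a hypothetical three-variable chain; the paper's actual algorithm performs the equivalent computation using only max-marginals (or BP beliefs), without enumeration.

```python
import itertools

# A tiny pairwise chain model over three binary variables
# (potentials are hypothetical, chosen to have a unique top two).
unary = [[0.6, 0.4], [0.3, 0.7], [0.5, 0.5]]
pair = [[[0.9, 0.1], [0.2, 0.8]],
        [[0.7, 0.3], [0.4, 0.6]]]

def score(cfg):
    """Unnormalized probability of a configuration."""
    p = 1.0
    for i, s in enumerate(cfg):
        p *= unary[i][s]
    for i in range(len(cfg) - 1):
        p *= pair[i][cfg[i]][cfg[i + 1]]
    return p

def best(predicate=lambda c: True):
    """Highest-scoring configuration satisfying a constraint."""
    cfgs = itertools.product(range(2), repeat=len(unary))
    return max((c for c in cfgs if predicate(c)), key=score)

def top_two():
    first = best()
    # The runner-up disagrees with the MAP somewhere: take the best
    # over each "x_i differs from the MAP" subspace.
    candidates = [best(lambda c, i=i: c[i] != first[i])
                  for i in range(len(unary))]
    second = max(candidates, key=score)
    return first, second

first, second = top_two()
```

Iterating the same partition trick (constraining away the configurations found so far) extends this from the top two to the top M.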
Approximate Inference and Protein-Folding
Yanover, Chen, Weiss, Yair
Side-chain prediction is an important subtask in the protein-folding problem. We show that finding a minimal-energy side-chain configuration is equivalent to performing inference in an undirected graphical model. The graphical model is relatively sparse yet has many cycles. We used this equivalence to assess the performance of approximate inference algorithms in a real-world setting. Specifically, we compared belief propagation (BP), generalized BP (GBP), and naive mean field (MF).
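The equivalence stated above is that minimizing an energy E(x) is the same as finding the MAP of the Gibbs distribution p(x) proportional to exp(-E(x)). The sketch below shows one of the compared methods, naive mean field, on a hypothetical two-variable energy model: q is restricted to a fully factorized form and updated by coordinate ascent. The energies here are made up for illustration; real side-chain models have hundreds of variables and rotamer states.

```python
import math

# Hypothetical two-variable, two-state energy model:
# E(x1, x2) = E1[x1] + E2[x2] + E12[x1][x2].
E1 = [0.0, 2.0]              # variable 1 prefers state 0
E2 = [2.0, 0.0]              # variable 2 prefers state 1
E12 = [[0.5, 0.0],
       [0.0, 0.5]]           # weak pairwise coupling

def normalize(w):
    z = sum(w)
    return [v / z for v in w]

# Naive mean field: approximate p(x) ~ exp(-E(x)) with a fully
# factorized q(x) = q1(x1) q2(x2), updated by coordinate ascent.
q1, q2 = [0.5, 0.5], [0.5, 0.5]
for _ in range(50):
    q1 = normalize([math.exp(-E1[s] - sum(q2[t] * E12[s][t] for t in range(2)))
                    for s in range(2)])
    q2 = normalize([math.exp(-E2[t] - sum(q1[s] * E12[s][t] for s in range(2)))
                    for t in range(2)])

# Decode each variable independently from its approximate marginal.
decoded = (max(range(2), key=lambda s: q1[s]),
           max(range(2), key=lambda t: q2[t]))
```

On this easy instance the decoded assignment coincides with the minimal-energy configuration (0, 1); in general, mean field offers no such guarantee, which is exactly what makes the empirical comparison against BP and GBP interesting.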