Comment: Reflections on the Deconfounder
D'Amour, Alexander
The aim of this comment (to appear as part of a formal discussion in JASA) is to draw out some conclusions from an extended back-and-forth I have had with Wang and Blei regarding the deconfounder method proposed in "The Blessings of Multiple Causes" [arXiv:1805.06826]. I will make three points here. First, in my role as the critic in this conversation, I will summarize some arguments about the lack of causal identification in the bulk of settings where the "informal" message of the paper suggests that the deconfounder could be used. This point is discussed at length in D'Amour 2019 [arXiv:1902.10286], which motivated the results concerning causal identification in Theorems 6-8 of "Blessings". Second, I will argue that adding parametric assumptions to the working model in order to obtain identification of causal parameters (a strategy followed in Theorem 6 and in the experimental examples) is risky, and should be pursued only when extremely strong prior information is available. Finally, I will consider the implications of the nonparametric identification results provided for a narrow, but non-trivial, set of causal estimands in Theorems 7 and 8. I will highlight that these results may be even more interesting from the perspective of detecting causal identification from observed data, under relatively weak assumptions about confounders.
On Multi-Cause Causal Inference with Unobserved Confounding: Counterexamples, Impossibility, and Alternatives
D'Amour, Alexander
Unobserved confounding is a central barrier to drawing causal inferences from observational data. Several authors have recently proposed that this barrier can be overcome in the case where one attempts to infer the effects of several variables simultaneously. In this paper, we present two simple, analytical counterexamples that challenge the general claims that are central to these approaches. In addition, we show that nonparametric identification is impossible in this setting. We discuss practical implications, and suggest alternatives to the methods that have been proposed so far in this line of work: using proxy variables and shifting focus to sensitivity analysis.
Reducing Reparameterization Gradient Variance
Miller, Andrew C., Foti, Nicholas J., D'Amour, Alexander, Adams, Ryan P.
Optimization with noisy gradients has become ubiquitous in statistics and machine learning. Reparameterization gradients, or gradient estimates computed via the "reparameterization trick," represent a class of noisy gradients often used in Monte Carlo variational inference (MCVI). However, when these gradient estimators are too noisy, the optimization procedure can be slow or fail to converge. One way to reduce noise is to use more samples for the gradient estimate, but this can be computationally expensive. Instead, we view the noisy gradient as a random variable, and form an inexpensive approximation of the generating procedure for the gradient sample. This approximation has high correlation with the noisy gradient by construction, making it a useful control variate for variance reduction. We demonstrate our approach on non-conjugate multi-level hierarchical models and a Bayesian neural network, where we observe gradient variance reductions of multiple orders of magnitude (20-2,000x).
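
To make the control-variate recipe in this abstract concrete, here is a minimal, self-contained sketch, not the paper's implementation, on a toy problem: estimating the gradient of E_{z ~ N(mu, sigma^2)}[z^2] with respect to mu via the reparameterization z = mu + sigma * eps. The control variate is the simplest possible approximation of the gradient-generating procedure, a term linear in the noise eps whose expectation is known in closed form; all variable names here are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Toy objective: d/dmu E_{z ~ N(mu, sigma^2)}[z^2] = 2 * mu.
# Reparameterize z = mu + sigma * eps with eps ~ N(0, 1); the single-sample
# reparameterization gradient is then g(eps) = 2 * (mu + sigma * eps).
mu, sigma = 1.5, 2.0
n_samples = 10_000
eps = rng.standard_normal(n_samples)
g = 2.0 * (mu + sigma * eps)  # noisy reparameterization gradient samples

# Control variate: a cheap approximation of the gradient's generating
# procedure, linear in eps, with known expectation E[c] = 0.
c = eps

# Scale the control variate by a* = Cov(g, c) / Var(c), estimated from the
# same samples, then subtract it from the noisy gradient. Since E[c] = 0,
# no re-centering term is needed and the estimator stays unbiased.
a_star = np.cov(g, c, ddof=0)[0, 1] / np.var(c)
g_cv = g - a_star * c

print("true gradient:        ", 2.0 * mu)
print("plain MC estimate:     mean %.4f, var %.4f" % (g.mean(), g.var()))
print("with control variate:  mean %.4f, var %.3g" % (g_cv.mean(), g_cv.var()))

Because the toy gradient is exactly linear in eps, the control variate removes essentially all of the variance here; in the models the paper targets, the cheap approximation is only partially correlated with the true gradient, so the reduction is large but not total.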