Debiased Bayesian inference for average treatment effects

Kolyan Ray, Botond Szabo

Neural Information Processing Systems

Working in the standard potential outcomes framework, we propose a data-driven modification to an arbitrary (nonparametric) prior based on the propensity score that corrects for the first-order posterior bias, thereby improving performance. We illustrate our method for Gaussian process (GP) priors using (semi-)synthetic data.



Bayesian Semiparametric Causal Inference: Targeted Doubly Robust Estimation of Treatment Effects

Sert, Gözde, Chakrabortty, Abhishek, Bhattacharya, Anirban

arXiv.org Machine Learning

We propose a semiparametric Bayesian methodology for estimating the average treatment effect (ATE) within the potential outcomes framework using observational data with high-dimensional nuisance parameters. Our method introduces a Bayesian debiasing procedure that corrects for bias arising from nuisance estimation and employs a targeted modeling strategy based on summary statistics rather than the full data. These summary statistics are identified in a debiased manner, enabling the estimation of nuisance bias via weighted observables and facilitating hierarchical learning of the ATE. By combining debiasing with sample splitting, our approach separates nuisance estimation from inference on the target parameter, reducing sensitivity to nuisance model specification. We establish that, under mild conditions, the marginal posterior for the ATE satisfies a Bernstein-von Mises theorem when both nuisance models are correctly specified and remains consistent and robust when only one is correct, achieving Bayesian double robustness. This ensures asymptotic efficiency and frequentist validity. Extensive simulations confirm the theoretical results, demonstrating accurate point estimation and credible intervals with nominal coverage, even in high-dimensional settings. The proposed framework can also be extended to other causal estimands, and its key principles offer a general foundation for advancing Bayesian semiparametric inference more broadly.
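The double robustness described in this abstract is the property of the augmented inverse-propensity-weighted (AIPW) estimating equation that such methods build on: the estimate stays consistent if either the outcome model or the propensity model is correct. A minimal sketch of the AIPW point estimate (function name and simulated inputs are illustrative, not from the paper):

```python
import numpy as np

def aipw_ate(y, a, mu1, mu0, e):
    """AIPW estimate of the average treatment effect.

    y: observed outcomes, a: binary treatment indicator,
    mu1/mu0: outcome-model predictions under treatment/control,
    e: estimated propensity scores.
    Consistent if either the outcome model (mu1, mu0) or the
    propensity model (e) is correctly specified.
    """
    return np.mean(
        mu1 - mu0
        + a * (y - mu1) / e
        - (1 - a) * (y - mu0) / (1 - e)
    )
```

In the sample-splitting scheme the abstract describes, the nuisance estimates `mu1`, `mu0`, and `e` would be fit on a separate fold from the one the estimate is evaluated on.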







A Details of mmTS for Exponential Families

For a matrix (vector) M, we let M

Neural Information Processing Systems

The general form of the exponential family likelihood is retained; the prior-to-posterior conversion simply involves updating the prior parameters with the sufficient statistics of the data. The inequality follows from Markov's inequality. This concludes the proof.

C.1 Proof of Theorem 1. Since the context is generated by some random process, we instead turn to martingales. We see that the choice of action given the observed context depends on past rounds.
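As a concrete instance of that prior-to-posterior conversion via sufficient statistics, consider the simplest conjugate pair in the exponential family: a Beta prior on a Bernoulli success probability. This sketch is illustrative and does not use the mmTS notation:

```python
def beta_bernoulli_update(alpha, beta, data):
    """Conjugate update for Bernoulli data under a Beta(alpha, beta) prior.

    The sufficient statistics are the success and failure counts;
    the posterior is Beta(alpha + successes, beta + failures).
    """
    successes = sum(data)
    failures = len(data) - successes
    return alpha + successes, beta + failures

# Beta(1, 1) prior with 7 successes and 3 failures -> Beta(8, 4) posterior
print(beta_bernoulli_update(1, 1, [1] * 7 + [0] * 3))  # (8, 4)
```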


Poisoning Bayesian Inference via Data Deletion and Replication

Carreau, Matthieu, Naveiro, Roi, Caballero, William N.

arXiv.org Machine Learning

Research in adversarial machine learning (AML) has shown that statistical models are vulnerable to maliciously altered data. However, despite advances in Bayesian machine learning models, most AML research remains concentrated on classical techniques. Therefore, we focus on extending the white-box model poisoning paradigm to attack generic Bayesian inference, highlighting its vulnerability in adversarial contexts. A suite of attacks is developed that allows an attacker to steer the Bayesian posterior toward a target distribution through the strategic deletion and replication of true observations, even when only sampling access to the posterior is available. Analytic properties of these algorithms are proven and their performance is empirically examined in both synthetic and real-world scenarios. With relatively little effort, the attacker is able to substantively alter the Bayesian's beliefs and, by accepting more risk, they can mold these beliefs to their will. By carefully constructing the adversarial posterior, surgical poisoning is achieved such that only targeted inferences are corrupted and others are minimally disturbed.
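The deletion-and-replication idea can be made concrete in a conjugate model, where each observation's inclusion count directly reweights the posterior. The following sketch uses a Gaussian mean with known variance and a greedy search over integer weights (0 = deleted, 2 = duplicated); the greedy heuristic is an illustration of the attack surface, not the paper's algorithm:

```python
import numpy as np

def poisoned_posterior_mean(x, w, mu0=0.0, tau2=1.0, sigma2=1.0):
    """Posterior mean of a N(mu, sigma2) model under a N(mu0, tau2) prior
    when observation i appears w[i] times (0 = deleted, 2 = duplicated)."""
    n = np.sum(w)
    precision = 1 / tau2 + n / sigma2
    return (mu0 / tau2 + np.sum(w * x) / sigma2) / precision

def greedy_attack(x, target, steps=50):
    """Greedily delete or replicate single observations to push the
    posterior mean toward `target` (illustrative heuristic)."""
    w = np.ones(len(x))
    for _ in range(steps):
        current_gap = abs(poisoned_posterior_mean(x, w) - target)
        best = None
        for i in range(len(x)):
            for delta in (-1, 1):  # -1 deletes a copy, +1 adds one
                if w[i] + delta < 0:
                    continue
                w2 = w.copy()
                w2[i] += delta
                gap = abs(poisoned_posterior_mean(x, w2) - target)
                if best is None or gap < best[0]:
                    best = (gap, i, delta)
        if best is None or best[0] >= current_gap:
            break  # no single edit improves the attack
        w[best[1]] += best[2]
    return w
```

Even this crude heuristic illustrates the paper's point: because the posterior is a smooth functional of the empirical data distribution, small integer perturbations to observation counts give the attacker fine-grained control over the resulting beliefs.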