
A code implementation and hyperparameter sweep

Neural Information Processing Systems

This section gives an overview of our open-source code. Together with the git repo, we include a 'tutorial colab': a Jupyter notebook that can be run in the browser without requiring any local installation. We view this open-source effort as a major contribution of our paper. We present the testbed pseudocode in this section, describe the other parameters we use in the Testbed, and cover the benchmark agents of Section 3.3. Step 3 of the evaluation: compute likelihoods for n = 1, 2, . . .
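The likelihood computation in the step above might be sketched as follows. This is a hypothetical illustration, not the Testbed's actual API: the function name, array shapes, and Monte Carlo averaging scheme are all assumptions.

```python
import numpy as np

def joint_log_likelihood(sampled_probs: np.ndarray, labels: np.ndarray) -> float:
    """Estimate the joint log-likelihood of `labels` under an agent's
    posterior predictive, approximated by Monte Carlo samples.

    sampled_probs: (num_samples, tau, num_classes) class probabilities,
                   one (tau, num_classes) slice per posterior sample.
    labels:        (tau,) integer class labels for the tau test inputs.
    """
    num_samples, tau, _ = sampled_probs.shape
    # Probability each posterior sample assigns to the observed label
    # at each of the tau inputs.
    per_input = sampled_probs[:, np.arange(tau), labels]  # (num_samples, tau)
    # Joint log-likelihood of the whole batch under each posterior sample.
    log_joint = np.sum(np.log(per_input), axis=1)  # (num_samples,)
    # Average over posterior samples in probability space (log-sum-exp
    # for numerical stability), then take the log.
    return float(np.logaddexp.reduce(log_joint) - np.log(num_samples))
```

Setting tau = 1 recovers the marginal log-likelihood of a single prediction; larger tau evaluates the joint predictive over a batch.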


Evaluating Predictive Distributions: Does Bayesian Deep Learning Work?

Osband, Ian, Wen, Zheng, Asghari, Seyed Mohammad, Dwaracherla, Vikranth, Hao, Botao, Ibrahimi, Morteza, Lawson, Dieterich, Lu, Xiuyuan, O'Donoghue, Brendan, Van Roy, Benjamin

arXiv.org Machine Learning

Posterior predictive distributions quantify uncertainties ignored by point estimates. This paper introduces The Neural Testbed, which provides tools for the systematic evaluation of agents that generate such predictions. Crucially, these tools assess not only the quality of marginal predictions per input, but also joint predictions given many inputs. Joint distributions are often critical for useful uncertainty quantification, but they have been largely overlooked by the Bayesian deep learning community. We benchmark several approaches to uncertainty estimation using a neural-network-based data generating process. Our results reveal the importance of evaluation beyond marginal predictions. Further, they reconcile sources of confusion in the field, such as why Bayesian deep learning approaches that generate accurate marginal predictions perform poorly in sequential decision tasks, how incorporating priors can be helpful, and what roles epistemic versus aleatoric uncertainty play when evaluating performance. We also present experiments on real-world challenge datasets, which show a high correlation with testbed results and confirm that the importance of evaluating joint predictive distributions carries over to real data. As part of this effort, we open-source The Neural Testbed, including all implementations from this paper.
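A toy illustration (not from the paper's code; the setup and numbers here are assumptions) of why joint predictions can separate agents that marginal evaluation cannot:

```python
import numpy as np

# Ground truth over two test inputs: the binary labels are perfectly
# correlated -- (0, 0) and (1, 1) each occur half the time.
pairs = [(0, 0), (1, 1)]

# Both agents predict P(y=1) = 0.5 for each input individually, so
# their marginal predictions are indistinguishable. Agent A treats
# the two labels as independent; Agent B models their correlation.
def joint_prob_a(y1, y2):
    return 0.5 * 0.5                  # independent fair coins

def joint_prob_b(y1, y2):
    return 0.5 if y1 == y2 else 0.0   # perfectly correlated coins

# Average joint negative log-likelihood over the ground-truth pairs.
loss_a = np.mean([-np.log(joint_prob_a(*p)) for p in pairs])
loss_b = np.mean([-np.log(joint_prob_b(*p)) for p in pairs])
# loss_a = log 4 > loss_b = log 2: the joint evaluation rewards the
# agent that captures the dependence between labels.
```

Here both agents achieve identical marginal log loss, yet Agent B's joint predictive is strictly better, mirroring the paper's point that joint evaluation surfaces differences that marginal metrics hide.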