Review for NeurIPS paper: Walsh-Hadamard Variational Inference for Bayesian Deep Learning
Neural Information Processing Systems
Weaknesses: The main weakness I see in this paper is its empirical evaluation, which could be more convincing. While the experiments on CNNs show that WHVI is competitive with other approaches on VGG16 while being more parameter-efficient (which is impressive), I am not sure how well this is aligned with the goal of the paper. I was under the impression that the goal was to improve Bayesian inference in deep neural networks (for which I would expect stronger results), but instead the goal may be to reduce the number of model parameters without sacrificing accuracy -- it would be great if the authors could clarify this. Furthermore, I would have liked to see a more extensive evaluation of uncertainty calibration, both in-domain and especially out-of-domain, using e.g. the benchmarks proposed in Ovadia et al. 2019, which would further strengthen the paper. Finally, the paper does not compare against state-of-the-art methods for deep uncertainty quantification such as deep ensembles (Lakshminarayanan et al. 2017; Ovadia et al. 2019), which makes it hard to assess the potential impact of the proposed approach.
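To make the parameter-efficiency point concrete: as I understand the paper, WHVI parameterizes a weight matrix as W = S1 H diag(g) H S2, where S1 and S2 are learned diagonal scaling matrices, H is a Walsh-Hadamard matrix, and g is sampled from a Gaussian variational posterior. This needs only O(D) parameters instead of O(D^2), and matrix-vector products cost O(D log D) via the fast Walsh-Hadamard transform. A minimal NumPy sketch (the names s1, s2, g are my own; this is illustrative, not the authors' code):

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalized), O(D log D).
    len(x) must be a power of two."""
    x = x.copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b          # butterfly: sums
            x[i + h:i + 2 * h] = a - b  # butterfly: differences
        h *= 2
    return x

D = 8
rng = np.random.default_rng(0)
s1 = rng.normal(size=D)   # learned diagonal scales (hypothetical values)
s2 = rng.normal(size=D)
g = rng.normal(size=D)    # one Gaussian sample from the variational posterior
v = rng.normal(size=D)    # input vector

# Structured product W v = S1 H diag(g) H S2 v, computed without ever
# materializing the D x D matrix W.
Wv_fast = s1 * fwht(g * fwht(s2 * v))

# Sanity check against the dense construction of W.
H = np.array([[(-1) ** bin(i & j).count("1") for j in range(D)]
              for i in range(D)])
W = np.diag(s1) @ H @ np.diag(g) @ H @ np.diag(s2)
assert np.allclose(W @ v, Wv_fast)
```

The structured product touches only 3 length-D parameter vectors, which is where the parameter savings over a mean-field Gaussian posterior on a full D x D matrix come from.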
Jan-25-2025, 12:50:07 GMT