Reviews: Bayesian Layers: A Module for Neural Network Uncertainty

Neural Information Processing Systems

I am still voting for acceptance of this paper. This paper is about a software component, called Bayesian Layers, that allows for the consistent creation of deep layers associated with some form of uncertainty or stochasticity. The paper outlines the design philosophy and principles, shows many examples, and concludes with new demonstrations of Bayesian neural network applications. I find that this work addresses a significant topic, since software for Bayesian (deep) learning models lags significantly behind. Integration and drop-in replacement with traditional architectures seems like the right avenue to pursue, and is a strong motivating point for this approach. I also think that this work is sufficiently original, relative to what one could expect from a software component.



This work was the subject of considerable debate among the reviewers. They all agreed that the work is presented well, and that both the idea of the paper and its realisation as a software interface are novel (or at least a clear improvement over existing frameworks). Software packages generally struggle to gain acceptance at major conferences, so I would like to throw my own vote in for this paper. It is true that the community has not yet developed a good and consistent way to evaluate software contributions, particularly vis-à-vis theoretical and empirical papers. But it is high time that our community became more professional in software development.


Bayesian Layers: A Module for Neural Network Uncertainty


We describe Bayesian Layers, a module designed for fast experimentation with neural network uncertainty. It extends neural network libraries with drop-in replacements for common layers. This enables composition via a unified abstraction over deterministic and stochastic functions and allows for scalability via the underlying system. These layers capture uncertainty over weights (Bayesian neural nets), pre-activation units (dropout), activations ("stochastic output layers"), or the function itself (Gaussian processes). They can also be reversible to propagate uncertainty from input to output.
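To illustrate the drop-in-replacement idea, here is a minimal sketch of the weight-uncertainty case: a dense layer whose Gaussian posterior over weights is sampled on each forward pass via the reparameterization trick. This is not the library's actual API; the class names, fields, and NumPy implementation are illustrative assumptions, chosen only to show how a stochastic layer can share the call signature of its deterministic counterpart.

```python
import numpy as np

class DenseDeterministic:
    """Ordinary dense layer: y = x @ W + b."""
    def __init__(self, in_dim, out_dim, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = rng.normal(scale=0.1, size=(in_dim, out_dim))
        self.b = np.zeros(out_dim)

    def __call__(self, x):
        return x @ self.W + self.b

class DenseReparameterization(DenseDeterministic):
    """Illustrative drop-in replacement: a Gaussian posterior over W,
    sampled per call as W = mu + sigma * eps (reparameterization trick)."""
    def __init__(self, in_dim, out_dim, rng=None):
        super().__init__(in_dim, out_dim, rng)
        self.W_mu = self.W                              # posterior mean
        self.W_log_sigma = np.full_like(self.W, -3.0)   # posterior log-std
        self.rng = rng or np.random.default_rng(1)

    def __call__(self, x):
        eps = self.rng.standard_normal(self.W_mu.shape)
        W = self.W_mu + np.exp(self.W_log_sigma) * eps
        return x @ W + self.b

# Identical call signatures, so the two layers compose interchangeably.
x = np.ones((4, 3))
det = DenseDeterministic(3, 2)
bayes = DenseReparameterization(3, 2)
y_det = det(x)                                        # deterministic, shape (4, 2)
samples = np.stack([bayes(x) for _ in range(100)])    # Monte Carlo forward passes
uncertainty = samples.std(axis=0)                     # per-output predictive spread
```

Because the stochastic layer keeps the deterministic layer's signature, it can be swapped into an existing architecture unchanged; repeated forward passes then yield Monte Carlo samples from which predictive uncertainty can be estimated.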