

Posterior Refinement Improves Sample Efficiency in Bayesian Neural Networks

Neural Information Processing Systems

Its derivation, based on Lu et al. [54], is as follows. For the HMC baseline, we use the default implementation of NUTS in Pyro. In Table 7, we present the detailed, non-averaged results to complement Table 4. In both cases, we observe that the performance of the refined posterior approaches HMC's. C.2 Text classification We further validate the proposed method on text classification problems.



Due to the non-linearity of NNs, no analytic solution to the integral exists, even when the likelihood and the approximate posterior are both Gaussian. A low-cost, unbiased, stochastic approximation can be obtained via Monte Carlo (MC) integration: obtain S samples from the approximate posterior and then compute the empirical expectation of the likelihood w.r.t. those samples.
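The MC approximation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a toy linear predictor with a diagonal-Gaussian approximate posterior over its weights, and all names (`mu`, `sigma`, `likelihood`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical diagonal-Gaussian approximate posterior q(w) = N(mu, diag(sigma^2))
# over the weights of a toy linear model (stand-in for a real NN posterior).
mu = np.array([0.5, -0.3])
sigma = np.array([0.1, 0.2])

def likelihood(w, x, y, noise=0.1):
    """Gaussian likelihood p(y | x, w) for a linear predictor x @ w."""
    pred = x @ w
    return np.exp(-0.5 * ((y - pred) / noise) ** 2) / (noise * np.sqrt(2 * np.pi))

# Monte Carlo estimate of the predictive p(y | x) = E_q[p(y | x, w)]:
# draw S samples from q, then average the likelihood over the samples.
S = 1000
x, y = np.array([1.0, 2.0]), 0.0
samples = mu + sigma * rng.standard_normal((S, 2))  # S draws from q(w)
predictive = np.mean([likelihood(w, x, y) for w in samples])
print(predictive)
```

Because the average is over i.i.d. draws from the approximate posterior, the estimator is unbiased for the (approximate) predictive density, and its variance shrinks as 1/S.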


Incorporating Interpretable Output Constraints in Bayesian Neural Networks

Neural Information Processing Systems

The ability to encode informative functional beliefs in BNN priors can significantly reduce the bias and uncertainty of the posterior predictive, especially in regions of input space sparsely covered by training data [27].