A Loss Derivation

In this section we provide a more detailed derivation of the proposed loss function (Equation 17). We make use of the fact that the negative entropy of the Dirichlet distribution is equivalent to the reverse KL-divergence to a flat Dirichlet, up to an additive constant that does not depend on the Dirichlet's parameters (this identity is spelled out at the end of this section).

We encountered numerical instabilities when training with this loss and resolved them by using a single LayerNorm layer just before the final output layer (a sketch of this placement is also given at the end of this section). We suspect that a more numerically stable implementation of the loss would not require LayerNorm. Additionally, we examined the models' median precisions.

Next, let's examine how to emulate an ensemble of auto-regressive models using Prior Networks (sketched below, after the uncertainty measures).

Measures of Uncertainty

Let's examine how, given this model, we can obtain measures of sequence-level total and knowledge uncertainty.
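This extract does not reproduce the definitions of these measures, so the following is only a sketch of the standard decomposition that the terms total and knowledge uncertainty conventionally refer to, written for an output sequence $\mathbf{y}$ given an input $\mathbf{x}$, with $\boldsymbol{\theta}$ ranging over ensemble members:

\[
\underbrace{\mathcal{H}\Big[\mathbb{E}_{p(\boldsymbol{\theta})}\,\mathrm{P}(\mathbf{y}\mid\mathbf{x},\boldsymbol{\theta})\Big]}_{\text{Total Uncertainty}}
=
\underbrace{\mathcal{I}\big[\mathbf{y};\boldsymbol{\theta}\mid\mathbf{x}\big]}_{\text{Knowledge Uncertainty}}
+
\underbrace{\mathbb{E}_{p(\boldsymbol{\theta})}\,\mathcal{H}\big[\mathrm{P}(\mathbf{y}\mid\mathbf{x},\boldsymbol{\theta})\big]}_{\text{Expected Data Uncertainty}}.
\]

Under a Prior Network the expectation over ensemble members is emulated by an expectation over categorical distributions drawn from the predicted Dirichlet, which is what the next sketch illustrates at the level of a single decoding step.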
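The paper's exact procedure is not shown in this extract; the following is a minimal, hypothetical PyTorch sketch of the ensemble-emulation idea mentioned above. Categorical distributions sampled from the predicted Dirichlet play the role of ensemble members, and Monte-Carlo estimates of the two uncertainty measures follow from the decomposition above. The names `alphas` and `num_members` are illustrative, not from the paper.

```python
import torch

@torch.no_grad()
def token_level_uncertainties(alphas: torch.Tensor, num_members: int = 10):
    """Emulate an ensemble at a single decoding step of a Prior Network.

    alphas: [batch, vocab_size] Dirichlet concentrations predicted by the
    Prior Network for the next token; each sample pi ~ Dir(alphas) plays
    the role of one ensemble member's next-token distribution.
    """
    # pis: [num_members, batch, vocab_size]
    pis = torch.distributions.Dirichlet(alphas).sample((num_members,))

    # Entropy of the mean distribution: Monte-Carlo total uncertainty.
    mean_pi = pis.mean(dim=0)
    total = -(mean_pi * mean_pi.log()).sum(dim=-1)

    # Mean of the per-member entropies: expected data uncertainty.
    expected_data = -(pis * pis.log()).sum(dim=-1).mean(dim=0)

    # Their difference is the mutual information: knowledge uncertainty.
    return total, total - expected_data
```

These are per-step quantities; the sequence-level measures referred to above would still need to be aggregated along decoding hypotheses, which this extract truncates before covering.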
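Returning to the derivation step at the top of this section: since the flat Dirichlet has constant density on the $K$-class simplex, $\mathrm{Dir}(\boldsymbol{\pi}\mid\mathbf{1}) = \Gamma(K)$, the stated equivalence follows directly from the definition of the KL-divergence:

\begin{align}
\mathrm{KL}\big[\mathrm{Dir}(\boldsymbol{\pi}\mid\boldsymbol{\alpha})\,\big\|\,\mathrm{Dir}(\boldsymbol{\pi}\mid\mathbf{1})\big]
&= -\mathcal{H}\big[\mathrm{Dir}(\boldsymbol{\pi}\mid\boldsymbol{\alpha})\big]
 - \mathbb{E}_{\mathrm{Dir}(\boldsymbol{\pi}\mid\boldsymbol{\alpha})}\big[\ln \mathrm{Dir}(\boldsymbol{\pi}\mid\mathbf{1})\big] \\
&= -\mathcal{H}\big[\mathrm{Dir}(\boldsymbol{\pi}\mid\boldsymbol{\alpha})\big] - \ln\Gamma(K),
\end{align}

so the negative entropy and the reverse KL to the flat Dirichlet differ only by the constant $-\ln\Gamma(K)$, which depends on the number of classes $K$ but not on $\boldsymbol{\alpha}$.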
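Finally, on the LayerNorm fix: the model architecture is not shown in this extract, so the following is only a hypothetical PyTorch sketch of the described placement, a single LayerNorm immediately before the final output layer of a head predicting Dirichlet concentrations. The class name and the exponential link are assumptions, not the paper's.

```python
import torch
import torch.nn as nn

class DirichletHead(nn.Module):
    """Hypothetical output head that predicts Dirichlet concentrations."""

    def __init__(self, hidden_dim: int, num_classes: int):
        super().__init__()
        # A single LayerNorm immediately before the final output layer,
        # as described above, to keep pre-activation scales bounded.
        self.norm = nn.LayerNorm(hidden_dim)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # exp(.) maps the logits to strictly positive concentrations alpha.
        return self.out(self.norm(hidden)).exp()
```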