Conan-Guez, Brieuc
On the Calibration of Epistemic Uncertainty: Principles, Paradoxes and Conflictual Loss
Fellaji, Mohammed, Pennerath, Frédéric, Conan-Guez, Brieuc, Couceiro, Miguel
The calibration of predictive distributions has been widely studied in deep learning, but the same cannot be said about the more specific epistemic uncertainty as produced by Deep Ensembles, Bayesian Deep Networks, or Evidential Deep Networks. Although measurable, this form of uncertainty is difficult to calibrate on an objective basis, as it depends on the prior, for which a variety of choices exist. Nevertheless, epistemic uncertainty must in all cases satisfy two formal requirements: first, it must decrease when the training dataset gets larger and, second, it must increase when the model expressiveness grows. Despite these expectations, our experimental study shows that on several reference datasets and models, measures of epistemic uncertainty violate these requirements, sometimes presenting trends completely opposite to those expected. These paradoxes between expectation and reality raise the question of the true utility of epistemic uncertainty as estimated by these models. A formal argument suggests that this disagreement is due to a poor approximation of the posterior distribution rather than to a flaw in the measure itself. Based on this observation, we propose a regularization function for deep ensembles, called conflictual loss, in line with the above requirements. We emphasize its strengths by showing experimentally that it fulfills both requirements of epistemic uncertainty, without sacrificing either the performance or the calibration of the deep ensembles.
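The abstract does not spell out the form of the conflictual loss, so the following is only a sketch of one plausible implementation: each ensemble member is pulled toward its own fixed "conflict" class in addition to the usual data term, so that members disagree a priori and agreement has to be earned from the training data. The function name, the per-member pseudo-label scheme, and the weight `lam` are assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def conflictual_ensemble_loss(member_logits, targets, lam=0.1):
    """Hypothetical conflictual-style loss for a deep ensemble.

    member_logits: list of (batch, n_classes) tensors, one per member.
    targets: (batch,) tensor of class indices.
    """
    n_classes = member_logits[0].shape[-1]
    loss = 0.0
    for m, logits in enumerate(member_logits):
        # Standard data term for this member.
        data_term = F.cross_entropy(logits, targets)
        # Illustrative regularizer: bias member m toward its assigned
        # class (m mod n_classes) via a constant pseudo-label, so the
        # members conflict wherever the data does not pin them down.
        pseudo = torch.full_like(targets, m % n_classes)
        prior_term = F.cross_entropy(logits, pseudo)
        loss = loss + data_term + lam * prior_term
    return loss / len(member_logits)
```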
Clarity: an improved gradient method for producing quality visual counterfactual explanations
Theobald, Claire, Pennerath, Frédéric, Conan-Guez, Brieuc, Couceiro, Miguel, Napoli, Amedeo
Visual counterfactual explanations identify modifications to an image that would change the prediction of a classifier. We propose a set of techniques based on generative models (VAE) and on a classifier ensemble directly trained in the latent space, which, taken together, improve the quality of the gradient required to compute visual counterfactuals. These improvements lead to a novel classification model, Clarity, which produces realistic counterfactual explanations for all images. We also present several experiments that give insight into why these techniques lead to better-quality results than those in the literature. The explanations produced are competitive with the state of the art and emphasize the importance of selecting a meaningful input space for training.
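As a rough sketch of the kind of gradient-based search the abstract describes, the snippet below performs gradient ascent in a VAE latent space toward a target class, averaging the log-probabilities of an ensemble of latent-space classifiers to smooth the gradient. The `encoder`, `decoder`, and `classifiers` interfaces are illustrative assumptions, not the actual Clarity API.

```python
import torch

def latent_counterfactual(x, target_class, encoder, decoder, classifiers,
                          steps=200, lr=0.05):
    """Sketch: search the latent space for a counterfactual of x.

    Assumes encoder(x) returns a latent code, decoder(z) an image, and
    each classifier in `classifiers` maps latent codes to class logits.
    """
    z = encoder(x).detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Average the ensemble's log-probabilities for a smoother,
        # less idiosyncratic gradient than a single classifier gives.
        logp = torch.stack(
            [torch.log_softmax(clf(z), dim=-1) for clf in classifiers]
        ).mean(dim=0)
        loss = -logp[..., target_class].mean()
        loss.backward()
        opt.step()
    return decoder(z.detach())  # decoded counterfactual image
```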
A Bayesian Neural Network based on Dropout Regulation
Theobald, Claire, Pennerath, Frédéric, Conan-Guez, Brieuc, Couceiro, Miguel, Napoli, Amedeo
Bayesian Neural Networks (BNNs) have recently emerged in the Deep Learning world for dealing with uncertainty estimation in classification tasks, and are used in many application domains such as astrophysics and autonomous driving. BNNs assume a prior over the weights of a neural network instead of point estimates, enabling in this way the estimation of both the aleatoric and epistemic uncertainty of the model prediction. Moreover, a particular type of BNN, namely MC Dropout, assumes a Bernoulli distribution on the weights by using Dropout. Several attempts to optimize the dropout rate exist, e.g. using a variational approach. In this paper, we present a new method called "Dropout Regulation" (DR), which consists of automatically adjusting the dropout rate during training using a controller, as used in automation. DR allows for a precise estimation of the uncertainty, comparable to the state of the art, while remaining simple to implement.
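"A controller as used in automation" suggests a feedback loop in the spirit of a PI controller. Below is a minimal sketch, assuming the error signal is the train/validation accuracy gap (a common overfitting proxy); the paper's actual controller and error signal may differ.

```python
class DropoutRegulator:
    """Sketch of a PI-style controller for the dropout rate.

    The gains and the choice of error signal are illustrative
    assumptions, not the method as published.
    """
    def __init__(self, p_init=0.5, kp=0.5, ki=0.05, p_min=0.05, p_max=0.95):
        self.p0 = self.p = p_init
        self.kp, self.ki = kp, ki
        self.p_min, self.p_max = p_min, p_max
        self.integral = 0.0

    def update(self, train_acc, val_acc):
        # A widening train/validation gap signals overfitting, so the
        # controller raises the dropout rate; a negative gap lowers it.
        error = train_acc - val_acc
        self.integral += error
        p = self.p0 + self.kp * error + self.ki * self.integral
        self.p = min(self.p_max, max(self.p_min, p))
        return self.p
```

In a training loop one would call `update` once per epoch and copy the returned rate into the network's dropout modules.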
Dissimilarity Clustering by Hierarchical Multi-Level Refinement
Conan-Guez, Brieuc, Rossi, Fabrice
We introduce in this paper a new way of optimizing the natural extension, to dissimilarity data, of the quantization error used in k-means clustering. The proposed method is based on hierarchical clustering analysis combined with multi-level heuristic refinement. The method is computationally efficient and achieves lower quantization errors than the
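For concreteness, here is a minimal sketch of one standard way to extend the k-means quantization error to a dissimilarity matrix: for squared Euclidean dissimilarities, the within-cluster sum of squares of a cluster C_k equals (1/(2|C_k|)) times the sum of pairwise dissimilarities inside C_k, so the criterion can be evaluated from the matrix alone. This criterion is a common choice in relational clustering; the paper's exact objective may be formulated differently.

```python
import numpy as np

def dissimilarity_quantization_error(D, labels):
    """Quantization error computed directly from a dissimilarity matrix.

    D: (n, n) symmetric dissimilarity matrix.
    labels: (n,) array of cluster assignments.
    For squared Euclidean D this equals the within-cluster sum of
    squares, via WCSS_k = sum_{i,j in C_k} D[i, j] / (2 * |C_k|).
    """
    error = 0.0
    for k in np.unique(labels):
        idx = np.where(labels == k)[0]
        if len(idx):
            error += D[np.ix_(idx, idx)].sum() / (2 * len(idx))
    return error
```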