Approaching Neural Network Uncertainty Realism
Sicking, Joachim, Kister, Alexander, Fahrland, Matthias, Eickeler, Stefan, Hüger, Fabian, Rüping, Stefan, Schlicht, Peter, Wirtz, Tim
Statistical models are inherently uncertain. Quantifying, or at least upper-bounding, their uncertainties is vital for safety-critical systems such as autonomous vehicles. While standard neural networks do not report this information, several approaches exist to integrate uncertainty estimates into them. Assessing the quality of these uncertainty estimates is not straightforward, as no direct ground-truth labels are available. Instead, implicit statistical assessments are required. For regression, we propose to evaluate uncertainty realism -- a strict quality criterion -- with a Mahalanobis distance-based statistical test. An empirical evaluation reveals the need for uncertainty measures that are appropriate to upper-bound heavy-tailed empirical errors. In addition, we transfer the variational U-Net classification architecture to standard supervised image-to-image tasks, adapt it to the automotive domain, and show that it significantly improves uncertainty realism compared to a plain encoder-decoder model.
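The Mahalanobis distance-based test described above can be sketched as follows. If a model's per-sample residuals really follow the Gaussians it predicts, the squared Mahalanobis distances of the residuals should be chi-squared distributed with as many degrees of freedom as output dimensions. The function name, the use of a Kolmogorov-Smirnov test, and the significance level below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy import stats

def uncertainty_realism_test(errors, covariances, alpha=0.05):
    """Sketch of a Mahalanobis-based realism check (details are assumptions).

    errors:      (n, k) array of regression residuals y - y_hat
    covariances: (n, k, k) array of predicted per-sample covariance matrices
    Returns (realism_not_rejected, p_value).
    """
    n, k = errors.shape
    # Squared Mahalanobis distance per sample: e_i^T Sigma_i^{-1} e_i
    d2 = np.einsum("ni,nij,nj->n", errors, np.linalg.inv(covariances), errors)
    # Under uncertainty realism, d2 should follow a chi-squared(k) distribution
    _, p_value = stats.kstest(d2, "chi2", args=(k,))
    return p_value >= alpha, p_value
```

A model that underestimates its covariances produces inflated Mahalanobis distances, so the test rejects; heavy-tailed empirical errors, as discussed in the abstract, would show up the same way.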
A Novel Regression Loss for Non-Parametric Uncertainty Optimization
Sicking, Joachim, Akila, Maram, Pintz, Maximilian, Wirtz, Tim, Fischer, Asja, Wrobel, Stefan
Quantification of uncertainty is one of the most promising approaches to establishing safe machine learning. Despite its importance, it is far from being generally solved, especially for neural networks. One of the most commonly used approaches is Monte Carlo dropout, which is computationally cheap and easy to apply in practice. However, it can underestimate the uncertainty. We propose a new objective, referred to as second-moment loss (SML), to address this issue. While the full network is encouraged to model the mean, the dropout networks are explicitly used to optimize the model variance. We extensively study the performance of the new objective on various UCI regression datasets. Compared to state-of-the-art deep ensembles, SML achieves comparable prediction accuracy and uncertainty estimates while requiring only a single model. Under distribution shift, we observe moderate improvements. As a side result, we introduce an intuitive Wasserstein distance-based uncertainty measure that is non-saturating and thus makes it possible to resolve quality differences between any two uncertainty estimates.
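The division of labor described above -- the full network models the mean while the dropout passes are trained to match the observed error magnitude -- can be illustrated with a schematic loss. This is a sketch under assumptions: the exact matching term, the weighting factor `lam`, and the function name are illustrative, not the paper's precise objective.

```python
import numpy as np

def second_moment_style_loss(y, mu_full, dropout_preds, lam=1.0):
    """Schematic second-moment-style loss (illustrative, not the paper's exact SML).

    y:             (n,) regression targets
    mu_full:       (n,) predictions of the full network (all units active)
    dropout_preds: (m, n) predictions from m stochastic dropout forward passes
    """
    # The full network is fit to the mean via a standard squared-error term.
    mse = np.mean((y - mu_full) ** 2)
    # Deviations of the dropout passes from the full-network prediction ...
    spread = np.abs(dropout_preds - mu_full)
    # ... are matched to the magnitude of the empirical residuals,
    # so the dropout spread is explicitly optimized to reflect model variance.
    resid = np.abs(y - mu_full)
    second_moment = np.mean((spread - resid) ** 2)
    return mse + lam * second_moment
```

When the dropout spread matches the residual magnitude, the second term vanishes; a spread that is too small (the underestimation problem of plain MC dropout mentioned above) is penalized directly.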