Learning Resilient Behaviors for Navigation Under Uncertainty Environments

arXiv.org Artificial Intelligence

Deep reinforcement learning has great potential to automatically acquire complex, adaptive behaviors for autonomous agents. However, the underlying neural network policies have not been widely deployed in real-world applications, especially in safety-critical tasks (e.g., autonomous driving). One reason is that the learned policy cannot perform flexible and resilient behaviors, as traditional methods can, to adapt to diverse environments. In this paper, we consider the problem of a mobile robot learning adaptive and resilient behaviors for navigating in unseen, uncertain environments while avoiding collisions. We present a novel approach for uncertainty-aware navigation by introducing an uncertainty-aware predictor to model the environmental uncertainty, and we propose a novel uncertainty-aware navigation network to learn resilient behaviors in previously unknown environments. To train the proposed uncertainty-aware network more stably and efficiently, we present the temperature decay training paradigm, which balances exploration and exploitation during the training process. Our experimental evaluation demonstrates that our approach can learn resilient behaviors in diverse environments and generate adaptive trajectories according to environmental uncertainties. Videos of the experiments are available at https://sites.google.com/view/resilient-nav/. With the recent progress of machine learning techniques, deep reinforcement learning has been seen as a promising approach for autonomous systems to learn intelligent and complex behaviors in manipulation and motion planning tasks [1]-[3].
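The abstract does not detail the temperature decay training paradigm, so the sketch below is only a hedged illustration of the general idea it names: a softmax temperature applied to the policy's action logits decays from a high (exploratory) value to a low (exploitative) one over training. The schedule shape, constants, and the `sample_action` helper are assumptions for illustration, not the paper's method.

```python
import numpy as np

def temperature(step, t_start=1.0, t_end=0.1, decay_steps=100_000):
    """Exponentially decay a softmax temperature from t_start to t_end
    over decay_steps training steps (assumed schedule, illustration only)."""
    frac = min(step / decay_steps, 1.0)
    return t_start * (t_end / t_start) ** frac

def sample_action(logits, step, rng=np.random.default_rng(0)):
    """Sample an action from temperature-scaled logits: a high temperature early
    in training flattens the distribution (exploration), a low temperature later
    concentrates it on the highest-scoring action (exploitation)."""
    scaled = np.asarray(logits, dtype=float) / temperature(step)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)
```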


What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?

Neural Information Processing Systems

There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. Epistemic uncertainty, on the other hand, accounts for uncertainty in the model itself, uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. To this end, we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty with epistemic uncertainty. We study models under the framework on per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data and also yields new state-of-the-art results on segmentation and depth regression benchmarks.
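For the regression case, the learned-attenuation loss mentioned above takes a simple form: the network predicts a log-variance alongside each mean, residuals are down-weighted by the predicted variance, and a log-variance penalty keeps the network from declaring everything noisy. The PyTorch snippet below is a minimal sketch of that aleatoric term only; epistemic uncertainty is handled separately (e.g. by sampling the model at test time), and the tensor shapes are assumptions for illustration.

```python
import torch

def heteroscedastic_regression_loss(pred_mean, pred_log_var, target):
    """Learned attenuation: 0.5 * exp(-s) * ||y - f||^2 + 0.5 * s per sample,
    where s = log(sigma^2) is predicted by the network. Points the model deems
    noisy receive a large predicted variance and thus a small residual weight."""
    precision = torch.exp(-pred_log_var)
    return (0.5 * precision * (target - pred_mean) ** 2 + 0.5 * pred_log_var).mean()
```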


Multivariate Uncertainty in Deep Learning

arXiv.org Machine Learning

Deep learning is increasingly used for state estimation problems such as tracking, navigation, and pose estimation. The uncertainties associated with these measurements are typically assumed to be a fixed covariance matrix. For many scenarios this assumption is inaccurate, leading to worse subsequent filtered state estimates. We show how to model multivariate uncertainty for regression problems with neural networks, incorporating both aleatoric and epistemic sources of heteroscedastic uncertainty. We train a deep uncertainty covariance matrix model in two ways: directly, using a multivariate Gaussian density loss function, and indirectly, using end-to-end training through a Kalman filter. We experimentally show, in a visual tracking problem, the large impact that accurate multivariate uncertainty quantification can have on Kalman filter estimation for both in-domain and out-of-domain evaluation data.
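One common way to realize the "multivariate Gaussian density loss" route described above is to have the network output the Cholesky factor of the covariance and minimize the Gaussian negative log-likelihood. The sketch below shows that loss plus a toy construction of a valid lower-triangular factor from raw network outputs; the head layout, the softplus diagonal, and the dimensions are assumptions for illustration, not the paper's architecture.

```python
import torch

def mvn_nll(mean, scale_tril, target):
    """Negative log-likelihood of a multivariate Gaussian whose covariance
    Sigma = L L^T is given by the lower-triangular Cholesky factor scale_tril."""
    dist = torch.distributions.MultivariateNormal(mean, scale_tril=scale_tril)
    return -dist.log_prob(target).mean()

# Toy usage: a batch of four 2-D measurements (hypothetical shapes).
raw = torch.randn(4, 5, requires_grad=True)               # stand-in for a network head
mean = raw[:, :2]
diag = torch.nn.functional.softplus(raw[:, 2:4]) + 1e-6   # positive diagonal of L
off_mask = torch.zeros(2, 2)
off_mask[1, 0] = 1.0                                       # single off-diagonal entry
scale_tril = torch.diag_embed(diag) + off_mask * raw[:, 4:5].unsqueeze(-1)
target = torch.randn(4, 2)
mvn_nll(mean, scale_tril, target).backward()
```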


Bayesian Linear Regression on Deep Representations

arXiv.org Machine Learning

A simple approach to obtaining uncertainty-aware neural networks for regression is to do Bayesian linear regression (BLR) on the representation from the last hidden layer. Recent work [Riquelme et al., 2018, Azizzadenesheli et al., 2018] indicates that the method is promising, though it has been limited to homoscedastic noise. In this paper, we propose a novel variation that enables the method to flexibly model heteroscedastic noise. The method is benchmarked against two prominent alternative methods on a set of standard datasets, and finally evaluated as an uncertainty-aware model in model-based reinforcement learning. Our experiments indicate that the method is competitive with standard ensembling, and that ensembles of BLR outperform the methods we compared to.
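For context, the homoscedastic baseline that the paper extends has a closed form: fix the last-layer features, place a Gaussian prior on the linear weights, and compute the posterior analytically. The NumPy sketch below shows that baseline only (the heteroscedastic variant is the paper's contribution and is not reproduced here); the alpha and beta hyperparameters are arbitrary illustrative values.

```python
import numpy as np

def blr_posterior(features, targets, alpha=1.0, beta=25.0):
    """Bayesian linear regression on fixed features (e.g. a network's last hidden
    layer). alpha is the prior weight precision, beta the (homoscedastic) noise
    precision; returns the posterior mean and covariance of the weights."""
    d = features.shape[1]
    precision = alpha * np.eye(d) + beta * features.T @ features
    cov = np.linalg.inv(precision)
    mean = beta * cov @ features.T @ targets
    return mean, cov

def blr_predict(phi, mean, cov, beta=25.0):
    """Predictive mean and variance for a new feature vector phi."""
    return phi @ mean, 1.0 / beta + phi @ cov @ phi
```

In practice the features would come from the trained network's penultimate layer, and the second term of the predictive variance, phi^T S phi, is what grows for inputs far from the training data.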


Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning

arXiv.org Machine Learning

Bayesian neural networks with latent variables (BNNs+LVs) are scalable and flexible probabilistic models: they account for uncertainty in the estimation of the network weights and, by making use of latent variables, can capture complex noise patterns in the data. In this work, we show how to separate these two forms of uncertainty for decision-making purposes. This decomposition allows us to successfully identify informative points for active learning of functions with heteroscedastic and bimodal noise. We also demonstrate how this decomposition allows us to define a novel risk-sensitive reinforcement learning criterion to identify policies that balance expected cost, model bias, and noise aversion.
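The decomposition described above is a law-of-total-variance split: the variance (over posterior weight draws) of the conditional mean is the epistemic part, and the expectation (over weight draws) of the conditional variance induced by the latent noise variable is the aleatoric part. The Monte-Carlo sketch below illustrates the split with a hypothetical `predict_fn(x, w, z)` that returns one sample of the model output for weight draw `w` and latent draw `z`; nothing here is the paper's actual code.

```python
import numpy as np

def decompose_uncertainty(predict_fn, x, n_weights=50, n_latents=50, seed=0):
    """Estimate epistemic and aleatoric variance of the predictive distribution:
    total variance = Var_w[ E_z[y | w] ]  (epistemic)
                   + E_w[ Var_z[y | w] ]  (aleatoric)."""
    rng = np.random.default_rng(seed)
    means, variances = [], []
    for _ in range(n_weights):
        w = rng.standard_normal()  # stand-in for one posterior weight sample
        ys = np.array([predict_fn(x, w, rng.standard_normal()) for _ in range(n_latents)])
        means.append(ys.mean())
        variances.append(ys.var())
    return np.var(means), np.mean(variances)
```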