Network prediction


Deliberative Explanations: visualizing network insecurities

Neural Information Processing Systems

A new approach to explainable AI, denoted {\it deliberative explanations,\/} is proposed. Deliberative explanations are a visualization technique that aims to go beyond the simple visualization of the image regions (or, more generally, input variables) responsible for a network prediction. Instead, they aim to expose the deliberations carried out by the network to arrive at that prediction, by uncovering the network's insecurities about it. The explanation consists of a list of insecurities, each composed of 1) an image region (more generally, a set of input variables), and 2) an ambiguity formed by the pair of classes responsible for the network's uncertainty about the region. Since insecurity detection requires quantifying the difficulty of network predictions, deliberative explanations combine ideas from the literatures on visual explanations and the assessment of classification difficulty.
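
A minimal sketch of how such insecurities could be enumerated, assuming region-level class posteriors are already available; the scoring rule (product of the two largest posteriors) and the function names are illustrative choices, not the paper's formulation:

```python
import numpy as np

def deliberative_explanation(region_probs, top_k=3):
    """Rank region-level insecurities as (region, class-pair) ambiguities.

    region_probs: array of shape (num_regions, num_classes) holding class
    posteriors computed on each candidate image region. Returns the top_k
    insecurities as (region_index, (class_a, class_b), score) tuples; the
    score (product of the two largest posteriors) is a stand-in for the
    paper's difficulty measures.
    """
    insecurities = []
    for r, p in enumerate(region_probs):
        a, b = np.argsort(p)[-2:][::-1]      # two most likely classes
        score = p[a] * p[b]                  # high when both are probable
        insecurities.append((r, (int(a), int(b)), float(score)))
    insecurities.sort(key=lambda t: t[2], reverse=True)
    return insecurities[:top_k]

# Toy usage: 4 regions, 3 classes.
probs = np.array([[0.45, 0.45, 0.10],
                  [0.90, 0.05, 0.05],
                  [0.50, 0.30, 0.20],
                  [0.34, 0.33, 0.33]])
for region, pair, score in deliberative_explanation(probs):
    print(f"region {region}: ambiguity between classes {pair}, score {score:.2f}")
```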


A data-driven two-microphone method for in-situ sound absorption measurements

Emmerich, Leon, Aste, Patrik, Brandão, Eric, Nolan, Mélanie, Cuenca, Jacques, Svensson, U. Peter, Maeder, Marcus, Marburg, Steffen, Zea, Elias

arXiv.org Artificial Intelligence

This work presents a data-driven approach to estimating the sound absorption coefficient of an infinite porous slab using a neural network and a two-microphone measurement on a finite porous sample. A 1D-convolutional network predicts the sound absorption coefficient from the complex-valued transfer function between the sound pressure measured at the two microphone positions. The network is trained and validated with numerical data generated by a boundary element model using the Delany-Bazley-Miki model, demonstrating accurate predictions for various numerical samples. The method is experimentally validated with baffled rectangular samples of a fibrous material, where sample size and source height are varied. The results show that the neural network offers the possibility to reliably predict the in-situ sound absorption of a porous material using the traditional two-microphone method as if the sample were infinite. The normal-incidence sound absorption coefficient obtained by the network compares well with that obtained theoretically and in an impedance tube. The proposed method has promising perspectives for estimating the sound absorption coefficient of acoustic materials after installation and in realistic operational conditions.
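
A hedged sketch of the kind of 1D-convolutional predictor described above, written in PyTorch; the layer sizes, the number of frequency bins, and the class name AbsorptionNet are assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class AbsorptionNet(nn.Module):
    """1D CNN mapping a two-microphone transfer function to sound absorption.

    The complex transfer function H12(f), sampled at n_freqs frequencies, is
    fed as two channels (real and imaginary parts). The output is the sound
    absorption coefficient at the same frequencies, squashed to [0, 1].
    """
    def __init__(self, n_freqs=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=1),
        )

    def forward(self, h12):                              # h12: (batch, n_freqs), complex
        x = torch.stack([h12.real, h12.imag], dim=1)     # (batch, 2, n_freqs)
        return torch.sigmoid(self.features(x)).squeeze(1)

# Toy usage with a random complex-valued transfer function.
model = AbsorptionNet()
h12 = torch.randn(4, 256, dtype=torch.cfloat)
alpha = model(h12)                                       # (4, 256), values in [0, 1]
print(alpha.shape)
```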


Enhancing Exploration Efficiency using Uncertainty-Aware Information Prediction

Kim, Seunghwan, Shin, Heejung, Yim, Gaeun, Kim, Changseung, Oh, Hyondong

arXiv.org Artificial Intelligence

Autonomous exploration is a crucial aspect of robotics, enabling robots to explore unknown environments and generate maps without prior knowledge. This paper proposes a method to enhance exploration efficiency by integrating neural network-based occupancy grid map prediction with an uncertainty-aware Bayesian neural network. The uncertainty of the predicted occupancy grid map is probabilistically integrated into the mutual information used to guide exploration. To demonstrate the effectiveness of the proposed method, we conducted comparative simulations within a frontier exploration framework in a realistic simulator environment against various information metrics. The proposed method showed superior performance in terms of exploration efficiency.
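
A minimal sketch of one way such an information metric could account for uncertainty, assuming Monte Carlo samples of the predicted occupancy map (e.g. from MC dropout); the BALD-style mutual information used here is an illustrative stand-in for the paper's metric:

```python
import numpy as np

def cell_entropy(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def uncertainty_aware_information(mc_predictions):
    """Per-cell information value from MC samples of a predicted occupancy map.

    mc_predictions: (n_samples, H, W) occupancy probabilities drawn from a
    Bayesian network. The mutual information between the prediction and the
    model parameters highlights cells whose predictions are uncertain because
    the model itself is uncertain (epistemic part only).
    """
    mean_p = mc_predictions.mean(axis=0)
    expected_entropy = cell_entropy(mc_predictions).mean(axis=0)
    return cell_entropy(mean_p) - expected_entropy

# Toy usage: 10 MC samples of a 32x32 predicted map.
samples = np.random.rand(10, 32, 32)
info = uncertainty_aware_information(samples)
best_cell = np.unravel_index(np.argmax(info), info.shape)
print("most informative cell:", best_cell)
```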


Network scaling and scale-driven loss balancing for intelligent poroelastography

Xu, Yang, Pourahmadian, Fatemeh

arXiv.org Artificial Intelligence

A deep learning framework is developed for multiscale characterization of poroelastic media from full waveform data, a problem known as poroelastography. Special attention is paid to heterogeneous environments whose multiphase properties may drastically change across several scales. Described in space-frequency, the data takes the form of focal solid displacement and pore pressure fields in various neighborhoods, furnished either by reconstruction from remote data or by direct measurements depending on the application. The objective is to simultaneously recover the six hydromechanical properties germane to Biot equations and their spatial distribution in a robust and efficient manner. Two major challenges impede direct application of existing state-of-the-art techniques for this purpose: (i) the sought-for properties belong to vastly different and potentially uncertain scales, and (ii) the loss function is multi-objective and multi-scale (both in terms of its individual components and the total loss). To help bridge the gap, we propose the idea of \emph{network scaling}, where the neural property maps are constructed by composing unit shape functions with a scaling layer. In this model, the unknown network parameters (weights and biases) remain of O(1) during training. This forms the basis for explicit scaling of the loss components and their derivatives with respect to the network parameters. Thereby, we propose the physics-based \emph{dynamic scaling} approach for adaptive loss balancing. The idea is first presented in a generic form for multi-physics and multi-scale PDE systems, and then applied through a set of numerical experiments to poroelastography. The results are presented along with reconstructions obtained by way of gradient normalization (GradNorm) and Softmax adaptive weights (SoftAdapt) for loss balancing. A comparative analysis of the methods and corresponding results is provided.
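
A rough illustration of the network-scaling idea, assuming a fixed characteristic scale per property and a simple residual-based loss; the class and function names, layer sizes, and numerical scales below are placeholders rather than the paper's construction:

```python
import torch
import torch.nn as nn

class ScaledPropertyMap(nn.Module):
    """Neural property map with an explicit scaling layer.

    A small network with O(1) parameters produces a dimensionless field in
    (0, 1); a fixed physical scale then maps it to the property's expected
    range, so the trainable parameters never need to carry the units.
    """
    def __init__(self, scale, hidden=32):
        super().__init__()
        self.scale = scale                       # characteristic magnitude
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1), nn.Sigmoid(),  # unit shape function
        )

    def forward(self, x):                        # x: spatial coordinates
        return self.scale * self.net(x)

def dynamically_scaled_loss(residuals, scales):
    """Balance multi-physics loss terms by their characteristic scales.

    residuals: list of per-equation residual tensors; scales: matching list
    of characteristic magnitudes, so each term contributes O(1) to the total.
    """
    return sum((r / s).pow(2).mean() for r, s in zip(residuals, scales))

# Toy usage: shear modulus ~ 1e9 Pa, porosity ~ 1e-1 (illustrative scales only).
mu_map, phi_map = ScaledPropertyMap(1e9), ScaledPropertyMap(1e-1)
x = torch.rand(128, 2)
loss = dynamically_scaled_loss([mu_map(x) - 5e8, phi_map(x) - 0.3], [1e9, 1e-1])
loss.backward()
```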


Evaluation of autonomous systems under data distribution shifts

Sikar, Daniel, Garcez, Artur

arXiv.org Artificial Intelligence

We posit that data can only be safe to use up to a certain threshold of the data distribution shift, after which control must be relinquished by the autonomous system and operation halted or handed to a human operator. With the use of a computer vision toy example we demonstrate that network predictive accuracy is impacted by data distribution shifts and propose distance metrics between training and testing data to define safe operation limits within said shifts. We conclude that beyond an empirically obtained threshold of the data distribution shift, it is unreasonable to expect network predictive accuracy not to degrade.

Zhang et al. [39] debated the need to rethink generalization, by demonstrating how traditional benchmarking approaches fail to explain why large neural networks generalize well in practice. By randomizing target labels, the experiments show that state-of-the-art convolutional neural networks for image classification trained with SGD (stochastic gradient descent) are large enough to fit a random labelling of the training data. This is achieved with a simple two-layer neural network, which presents a "perfect finite sample expressivity".
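
A hedged sketch of a train/test distance with an operational threshold, in the spirit of the abstract; the specific metric (normalized feature-mean distance), the threshold value, and the function names are assumptions, not the metrics proposed in the paper:

```python
import numpy as np

def distribution_shift_distance(train_features, test_features):
    """Euclidean distance between feature means, normalized by the pooled
    per-dimension standard deviation. Purely illustrative shift metric."""
    mu_train, mu_test = train_features.mean(0), test_features.mean(0)
    pooled_std = np.sqrt(0.5 * (train_features.var(0) + test_features.var(0))) + 1e-8
    return float(np.linalg.norm((mu_train - mu_test) / pooled_std))

def safe_to_operate(train_features, test_features, threshold):
    """Flag when the shift exceeds an empirically obtained threshold,
    i.e. when control should be handed back to a human operator."""
    return distribution_shift_distance(train_features, test_features) <= threshold

# Toy usage: a mild shift versus a large one.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 16))
print(safe_to_operate(train, rng.normal(0.2, 1.0, (200, 16)), threshold=2.0))  # True
print(safe_to_operate(train, rng.normal(3.0, 1.0, (200, 16)), threshold=2.0))  # False
```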


Deep Neural Networks Tend To Extrapolate Predictably

Kang, Katie, Setlur, Amrith, Tomlin, Claire, Levine, Sergey

arXiv.org Artificial Intelligence

The prevailing belief in machine learning posits that deep neural networks behave erratically when presented with out-of-distribution (OOD) inputs, often yielding predictions that are not only incorrect, but incorrect with high confidence [19, 37]. However, there is some evidence which seemingly contradicts this conventional wisdom - for example, Hendrycks and Gimpel [24] show that the softmax probabilities output by neural network classifiers actually tend to be less confident on OOD inputs, making them surprisingly effective OOD detectors. In our work, we find that this softmax behavior may be reflective of a more general pattern in the way neural networks extrapolate: as inputs diverge further from the training distribution, a neural network's predictions often converge towards a fixed constant value. Moreover, this constant value often approximates the best prediction the network can produce without observing any inputs, which we refer to as the optimal constant solution (OCS). We call this the "reversion to the OCS" hypothesis: neural network predictions on high-dimensional OOD inputs tend to revert towards the optimal constant solution.
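
A small illustration of the reversion-to-the-OCS idea for a cross-entropy classifier, where the optimal constant solution is the marginal label distribution of the training set; the KL-based distance and the toy numbers are illustrative choices, not the paper's measurements:

```python
import numpy as np

def optimal_constant_solution(train_labels, n_classes):
    """For cross-entropy, the best input-independent prediction is the
    marginal label distribution of the training set."""
    counts = np.bincount(train_labels, minlength=n_classes).astype(float)
    return counts / counts.sum()

def distance_to_ocs(softmax_outputs, ocs, eps=1e-8):
    """Mean KL divergence from the OCS to the network's predictions. Under
    the reversion-to-the-OCS hypothesis, this shrinks as inputs move further
    out of distribution."""
    p = np.clip(softmax_outputs, eps, 1.0)
    return float(np.mean(np.sum(ocs * (np.log(ocs + eps) - np.log(p)), axis=1)))

# Toy usage: in-distribution predictions are confident, OOD ones sit near the OCS.
ocs = optimal_constant_solution(np.array([0, 0, 1, 2, 2, 2]), n_classes=3)
in_dist = np.array([[0.9, 0.05, 0.05], [0.1, 0.85, 0.05]])
ood = np.array([[0.35, 0.17, 0.48], [0.3, 0.2, 0.5]])
print(distance_to_ocs(in_dist, ocs), ">", distance_to_ocs(ood, ocs))
```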


Mitigating Adversarial Vulnerability through Causal Parameter Estimation by Adversarial Double Machine Learning

Lee, Byung-Kwan, Kim, Junho, Ro, Yong Man

arXiv.org Artificial Intelligence

Adversarial examples derived from deliberately crafted perturbations on visual inputs can easily harm the decision process of deep neural networks. To prevent potential threats, various adversarial training-based defense methods have grown rapidly and become a de facto standard approach for robustness. Despite recent competitive achievements, we observe that adversarial vulnerability varies across targets and certain vulnerabilities remain prevalent. Intriguingly, this peculiar phenomenon cannot be relieved even with deeper architectures and advanced defense methods. To address this issue, in this paper, we introduce a causal approach called Adversarial Double Machine Learning (ADML), which allows us to quantify the degree of adversarial vulnerability for network predictions and capture the effect of treatments on the outcomes of interest. ADML can directly estimate the causal parameter of adversarial perturbations per se and mitigate negative effects that can potentially damage robustness, bringing a causal perspective to adversarial vulnerability. Through extensive experiments on various CNN and Transformer architectures, we corroborate that ADML improves adversarial robustness by large margins and relieves the empirically observed vulnerability.
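
For orientation, a generic cross-fitted partialling-out estimator in the double machine learning style (Chernozhukov et al.); ADML's adaptation of this recipe to adversarial perturbations and network outputs is not reproduced here, and the ridge nuisance models are a deliberate simplification:

```python
import numpy as np

def ridge_fit_predict(X_train, y_train, X_test, lam=1e-2):
    """Tiny ridge regressor used as the nuisance model in this sketch."""
    d = X_train.shape[1]
    w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(d), X_train.T @ y_train)
    return X_test @ w

def dml_partialling_out(X, t, y, n_folds=2):
    """Cross-fitted partialling-out estimate of the effect of treatment t on y.

    Residualize both the treatment and the outcome against the covariates X
    on held-out folds, then regress the outcome residuals on the treatment
    residuals to obtain the causal parameter theta.
    """
    folds = np.array_split(np.random.permutation(len(y)), n_folds)
    res_t, res_y = np.zeros_like(t), np.zeros_like(y)
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        res_t[test] = t[test] - ridge_fit_predict(X[train], t[train], X[test])
        res_y[test] = y[test] - ridge_fit_predict(X[train], y[train], X[test])
    return float(res_t @ res_y / (res_t @ res_t))

# Toy usage: y = 2*t plus confounding through X; the estimate lands near 2.0.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
t = X @ rng.normal(size=5) + rng.normal(size=500)
y = 2.0 * t + X @ rng.normal(size=5) + rng.normal(size=500)
print(dml_partialling_out(X, t, y))
```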


A Deep Double Ritz Method (D$^2$RM) for solving Partial Differential Equations using Neural Networks

Uriarte, Carlos, Pardo, David, Muga, Ignacio, Muñoz-Matute, Judit

arXiv.org Artificial Intelligence

Residual minimization is a widely used technique for solving Partial Differential Equations in variational form. It minimizes the dual norm of the residual, which naturally yields a saddle-point (min-max) problem over the so-called trial and test spaces. In the context of neural networks, we can address this min-max approach by employing one network to seek the trial minimum, while another network seeks the test maximizers. However, the resulting method is numerically unstable as we approach the trial solution. To overcome this, we reformulate the residual minimization as an equivalent minimization of a Ritz functional fed by optimal test functions computed from another Ritz functional minimization. We call the resulting scheme the Deep Double Ritz Method (D$^2$RM), which combines two neural networks for approximating trial functions and optimal test functions along a nested double Ritz minimization strategy. Numerical results on different diffusion and convection problems support the robustness of our method, up to the approximation properties of the networks and the training capacity of the optimizers.
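
A sketch of the underlying reformulation in generic notation (via the Riesz representative of the residual); the symbols are chosen here for illustration, and the paper's exact functionals and normalizations may differ:

```latex
% Abstract variational problem: find u in U such that b(u,v) = l(v) for all v in V.
% Dual-norm residual minimization is a saddle-point (min-max) problem:
\min_{u\in U}\ \|r(u)\|_{V'}
  \;=\; \min_{u\in U}\ \max_{v\in V\setminus\{0\}}\ \frac{b(u,v)-l(v)}{\|v\|_{V}} .
% The inner maximizer is, up to scaling, the Riesz representative v_u of the
% residual, which itself solves a Ritz-type minimization:
v_u \;=\; \arg\min_{v\in V}\ \tfrac12\,\|v\|_{V}^{2} - \bigl(b(u,v)-l(v)\bigr),
\qquad \|v_u\|_{V} = \|r(u)\|_{V'} .
% Substituting v_u turns the unstable min-max into a nested double minimization,
% with one network approximating the trial function u and another the optimal
% test function v_u:
\min_{u\in U}\ \tfrac12\,\|v_u\|_{V}^{2} .
```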


Continual Adaptation of Semantic Segmentation using Complementary 2D-3D Data Representations

Frey, Jonas, Blum, Hermann, Milano, Francesco, Siegwart, Roland, Cadena, Cesar

arXiv.org Artificial Intelligence

Semantic segmentation networks are usually pre-trained once and not updated during deployment. As a consequence, misclassifications commonly occur if the distribution of the training data deviates from the one encountered during the robot's operation. We propose to mitigate this problem by adapting the neural network to the robot's environment during deployment, without any need for external supervision. Leveraging complementary data representations, we generate a supervision signal, by probabilistically accumulating consecutive 2D semantic predictions in a volumetric 3D map. We then train the network on renderings of the accumulated semantic map, effectively resolving ambiguities and enforcing multi-view consistency through the 3D representation. In contrast to scene adaptation methods, we aim to retain the previously-learned knowledge, and therefore employ a continual learning experience replay strategy to adapt the network. Through extensive experimental evaluation, we show successful adaptation to real-world indoor scenes both on the ScanNet dataset and on in-house data recorded with an RGB-D sensor. Our method increases the segmentation accuracy on average by 9.9% compared to the fixed pre-trained neural network, while retaining knowledge from the pre-training dataset.
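
A minimal sketch of the probabilistic accumulation step, assuming the voxel indices hit by each frame are known; the log-probability fusion rule and the class and method names are illustrative, and the rendering and replay components of the paper are not reproduced:

```python
import numpy as np

class SemanticVoxelMap:
    """Probabilistic accumulation of per-frame semantic predictions in 3D.

    Each observed voxel accumulates the log-probabilities of the network's 2D
    softmax output (a common Bayesian fusion rule); the fused argmax later
    serves as a pseudo-label for adaptation.
    """
    def __init__(self, n_voxels, n_classes):
        self.log_probs = np.zeros((n_voxels, n_classes))

    def integrate(self, voxel_ids, class_probs, eps=1e-6):
        """voxel_ids: (N,) voxels hit by this frame; class_probs: (N, C) softmax."""
        np.add.at(self.log_probs, voxel_ids, np.log(class_probs + eps))

    def pseudo_labels(self):
        """Fused per-voxel class estimate used as the self-supervision signal."""
        return self.log_probs.argmax(axis=1)

# Toy usage: two frames vote on overlapping voxels; fusion resolves the ambiguity.
vmap = SemanticVoxelMap(n_voxels=4, n_classes=3)
vmap.integrate(np.array([0, 1, 2]), np.array([[0.6, 0.3, 0.1],
                                              [0.2, 0.7, 0.1],
                                              [0.4, 0.4, 0.2]]))
vmap.integrate(np.array([2, 3]), np.array([[0.1, 0.8, 0.1],
                                           [0.3, 0.3, 0.4]]))
print(vmap.pseudo_labels())   # [0 1 1 2]
```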