
On the Similarity of Deep Learning Representations Across Didactic and Adversarial Examples

Douglas, P. K., Farahani, Farzad Vasheghani

arXiv.org Artificial Intelligence

The increasing use of deep neural networks (DNNs) has motivated a parallel endeavor: the design of adversaries that profit from successful misclassifications. However, not all adversarial examples are crafted for malicious purposes. For example, real-world systems often contain physical, temporal, and sampling variability across instrumentation. Adversarial examples in the wild may inadvertently prove deleterious for accurate predictive modeling. Conversely, naturally occurring covariance of image features may serve didactic purposes. Here, we studied the stability of deep learning representations for neuroimaging classification across didactic and adversarial conditions characteristic of MRI acquisition variability. We show that representational similarity and performance vary according to the frequency of adversarial examples in the input space.
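The abstract's central quantity, representational similarity between network activations, can be illustrated with linear CKA (centered kernel alignment), one widely used similarity measure; this is a generic sketch, not necessarily the exact metric the paper employs.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices of shape
    (n_samples, n_features); higher values mean more similar
    representations, with 1.0 for identical ones."""
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, ord='fro') ** 2
    den = np.linalg.norm(X.T @ X, ord='fro') * np.linalg.norm(Y.T @ Y, ord='fro')
    return num / den
```

One could compare, say, a layer's activations on clean versus acquisition-perturbed images to quantify how stable the learned representation is.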


Persistently Feasible Robust Safe Control by Safety Index Synthesis and Convex Semi-Infinite Programming

Wei, Tianhao, Kang, Shucheng, Zhao, Weiye, Liu, Changliu

arXiv.org Artificial Intelligence

Model mismatches prevail in real-world applications, so ensuring safety for systems with uncertain dynamic models is critical. However, existing robust safe controllers may not be realizable when control limits exist, and existing methods use loose over-approximations of uncertainties, leading to conservative safe controls. To address these challenges, we propose a control-limits-aware robust safe control framework for bounded state-dependent uncertainties. We propose safety index synthesis to find a robust safe controller guaranteed to be realizable under control limits, and we solve for robust safe control via Convex Semi-Infinite Programming, which is the tightest formulation for convex bounded uncertainties and leads to the least conservative control. In addition, we analyze when and how safety can be preserved under unmodeled uncertainties. Experimental results show that our robust safe controller is always realizable under control limits and is much less conservative than strong baselines.
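The core idea of robust safe control under bounded uncertainty can be sketched for a scalar control-affine system x_dot = f(x) + g(x)u + d with |d| <= d_max: the worst-case safety-index decrease condition reduces to a single linear constraint on u, which here admits a closed-form least-deviation solution. This is an illustrative toy, not the paper's Convex Semi-Infinite Programming formulation (which handles general convex bounded, state-dependent uncertainty sets); all names below are made up for the example.

```python
import numpy as np

def robust_safe_control(u_ref, grad_phi, f, g, d_max, eta, u_min, u_max):
    """Least-deviation safe control for x_dot = f + g*u + d, |d| <= d_max.

    Enforces the worst-case safety-index decrease condition
        grad_phi * (f + g*u) + |grad_phi| * d_max <= -eta,
    which is a linear constraint a*u <= b, then clips to control limits.
    """
    a = grad_phi * g
    b = -eta - grad_phi * f - abs(grad_phi) * d_max
    if a > 0:
        u = min(u_ref, b / a)   # constraint is an upper bound on u
    elif a < 0:
        u = max(u_ref, b / a)   # constraint is a lower bound on u
    else:
        u = u_ref               # constraint does not depend on u
    # Clipping may break the constraint when the safe set is not
    # realizable under control limits; the paper's safety index
    # synthesis is precisely what rules that case out.
    return float(np.clip(u, u_min, u_max))
```

For example, with grad_phi=1, f=0, g=1, d_max=0.1, eta=0.2, a reference input u_ref=1.0 is pulled down to u=-0.3, the largest input that keeps the safety index decreasing even under the worst-case disturbance.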