Ethical AI and the importance of guidelines for algorithms -- explained – Ranzware Tech NEWS

#artificialintelligence

In October 2018, Amazon discontinued an artificial intelligence–powered recruiting tool after discovering the system was biased against female applicants. In 2016, a ProPublica investigation revealed that a recidivism assessment tool that used machine learning was biased against black defendants. More recently, the US Department of Housing and Urban Development sued Facebook because its ad-serving algorithms enabled advertisers to discriminate based on characteristics like gender and race. And Google declined to renew its AI contract with the Department of Defense after employees raised ethical concerns. Those are just a few of the many ethical controversies surrounding artificial intelligence algorithms in recent years.


Here are the 7 requirements for building ethical AI, according to the EU commission

#artificialintelligence


Bayesian Adversarial Learning

Neural Information Processing Systems

Deep neural networks are known to be vulnerable to adversarial attacks, raising serious security concerns for practical deployment. Popular defensive approaches can be formulated as a (distributionally) robust optimization problem, which minimizes a "point estimate" of the worst-case loss derived from either per-datum perturbations or an adversary data-generating distribution within certain predefined constraints. This point estimate ignores potential test adversaries beyond the predefined constraints, so model robustness may deteriorate sharply against stronger test-time adversarial data. In this work, a novel robust training framework, Bayesian Adversarial Learning (BAL), is proposed to alleviate this issue: a distribution is placed over the adversarial data-generating distribution to account for the uncertainty of the adversarial data-generating process. This uncertainty directly accounts for potential adversaries stronger than the point estimate used in distributionally robust optimization. Uncertainty over model parameters is also incorporated to complete the full Bayesian framework. We design a scalable Markov Chain Monte Carlo sampling strategy to obtain the posterior distribution over model parameters. Experiments verify the superiority of BAL over existing adversarial training methods. The code for BAL is available at https://tinyurl.com/ycxsaewr.
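
To make the idea concrete, here is a minimal sketch (not the authors' released code) of Bayesian adversarial training: rather than defending against a single worst-case perturbation, the adversary's strength is itself sampled from a distribution, and model parameters are updated with SGLD-style noisy gradient steps so that repeated updates approximate samples from the parameter posterior. The function name, the uniform prior over the FGSM step size, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sgld_adversarial_step(model, x, y, lr=1e-4, eps_range=(0.0, 0.1)):
    # Sample the adversary's strength from a distribution rather than
    # fixing a single worst-case point estimate (an illustrative prior).
    eps = torch.empty(1).uniform_(*eps_range).item()

    # FGSM-style adversarial example at the sampled strength.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv + eps * grad.sign()).detach()

    # SGLD-style update: gradient step on the adversarial loss plus Gaussian
    # noise, so repeated steps draw approximate posterior samples.
    loss_adv = F.cross_entropy(model(x_adv), y)
    model.zero_grad()
    loss_adv.backward()
    with torch.no_grad():
        for p in model.parameters():
            noise = torch.randn_like(p) * (2.0 * lr) ** 0.5
            p.add_(-lr * p.grad + noise)
```

Calling this step repeatedly over minibatches yields an ensemble of parameter samples; averaging their predictions is one way to exploit the posterior at test time.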


Beyond Explainability: Leveraging Interpretability for Improved Adversarial Learning

arXiv.org Machine Learning

In this study, we propose leveraging interpretability for tasks beyond explanation alone. In particular, we put forward a novel strategy for applying gradient-based interpretability to adversarial examples, using the insights gained to aid adversarial learning. More specifically, we introduce spatially constrained one-pixel adversarial perturbations, where the search for such perturbations is guided toward the more susceptible regions identified via gradient-based interpretability. Experimental results on different benchmark datasets show that this spatially constrained one-pixel perturbation strategy noticeably improves the speed of convergence and produces successful attacks that are also visually difficult to perceive, illustrating an effective use of interpretability methods for purposes other than explainability.
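
As a rough illustration of this strategy (a sketch under stated assumptions, not the paper's implementation), the snippet below computes a gradient-based saliency map, restricts the candidate locations to the top_k most salient pixels, and then randomly searches for a single-pixel perturbation within that region; top_k, trials, and the random pixel value are assumed choices.

```python
import torch
import torch.nn.functional as F

def saliency_guided_one_pixel(model, x, y, top_k=50, trials=100):
    # Saliency: magnitude of the loss gradient w.r.t. each input pixel,
    # summed over channels; x has shape (C, H, W), y is a scalar label.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    saliency = torch.autograd.grad(loss, x)[0].abs().sum(dim=0)  # (H, W)

    # Spatial constraint: only the top_k most salient pixels are candidates.
    flat_idx = saliency.flatten().topk(top_k).indices
    for _ in range(trials):
        idx = flat_idx[torch.randint(len(flat_idx), (1,))].item()
        r, c = divmod(idx, x.shape[-1])
        x_adv = x.detach().clone()
        x_adv[:, r, c] = torch.rand(x.shape[0])  # random pixel value in [0, 1]
        if model(x_adv.unsqueeze(0)).argmax(dim=1) != y:
            return x_adv  # successful one-pixel misclassification
    return None  # no successful attack found within the search budget
```

The constraint shrinks the search space from all H×W pixel positions to top_k candidates, which is what drives the faster convergence the abstract reports.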


RL-Based Method for Benchmarking the Adversarial Resilience and Robustness of Deep Reinforcement Learning Policies

arXiv.org Artificial Intelligence

This paper investigates the resilience and robustness of Deep Reinforcement Learning (DRL) policies to adversarial perturbations in the state space. We first present an approach for disentangling the vulnerabilities caused by the representation learning of DRL agents from those that stem from the sensitivity of DRL policies to distributional shifts in state transitions. Building on this approach, we propose two RL-based techniques for quantitative benchmarking of adversarial resilience and robustness in DRL policies against perturbations of state transitions. We demonstrate the feasibility of our proposals through experimental evaluation of resilience and robustness in DQN, A2C, and PPO2 policies trained in the CartPole environment.
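
The paper's benchmarking procedure is more involved, but the following sketch (assuming the Gymnasium API; the random policy is a stand-in for a trained DQN/A2C/PPO2 agent) conveys the basic measurement: sweep a perturbation budget eps, inject bounded noise into the observed state, and record how the average episode return degrades.

```python
import numpy as np
import gymnasium as gym  # assumes the Gymnasium API; adapt for classic gym

def perturbed_return(policy, env, eps, episodes=10):
    """Average return when each observation is perturbed within an L-inf ball."""
    totals = []
    for _ in range(episodes):
        obs, _ = env.reset()
        done, total = False, 0.0
        while not done:
            noisy_obs = obs + np.random.uniform(-eps, eps, size=obs.shape)
            action = policy(noisy_obs)  # policy: observation -> discrete action
            obs, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            total += reward
        totals.append(total)
    return float(np.mean(totals))

env = gym.make("CartPole-v1")
random_policy = lambda obs: env.action_space.sample()  # stand-in for a trained agent
for eps in (0.0, 0.05, 0.1, 0.2):
    print(eps, perturbed_return(random_policy, env, eps))
```

Plotting return against eps for several trained policies gives a simple robustness profile of the kind such benchmarks aim to compare across DQN, A2C, and PPO2.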