An Audit Framework for Adopting AI-Nudging on Children

Ganapini, Marianna, Panai, Enrico

arXiv.org Artificial Intelligence

This is an audit framework for AI-nudging. Unlike the static form of nudging usually discussed in the literature, we focus here on a type of nudging that uses large amounts of data to provide personalized, dynamic feedback and interfaces. We call this AI-nudging (Lanzing, 2019, p. 549; Yeung, 2017). The ultimate goal of the audit outlined here is to ensure that an AI system that uses nudges will maintain a level of moral inertia and neutrality by complying with the recommendations, requirements, or suggestions of the audit (in other words, the criteria of the audit). In the case of unintended negative consequences, the audit suggests risk mitigation mechanisms that can be put in place. In the case of unintended positive consequences, it suggests some reinforcement mechanisms. Sponsored by the IBM-Notre Dame Tech Ethics Lab


Understanding Multilevel Models (Artificial Intelligence)

#artificialintelligence

Abstract: Multilevel linear models allow flexible statistical modelling of complex data with different levels of stratification. Identifying the most appropriate model from the large set of possible candidates is a challenging problem. In the Bayesian setting, the standard approach is a comparison of models using the model evidence or the Bayes factor. However, in all but the simplest of cases, direct computation of these quantities is impossible. Sampling approaches such as Markov chain Monte Carlo and sequential Monte Carlo are widely used instead, but it is not always clear how well such techniques perform in practice.
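The Bayes factor mentioned in the abstract is the ratio of two models' evidences (marginal likelihoods). As a minimal illustration of the idea, not of the paper's method, the sketch below uses a hypothetical Beta-Binomial setting where the evidence happens to be available in closed form, so no Monte Carlo approximation is needed; the model names and priors are invented for the example.

```python
# Hypothetical example: Bayesian model comparison via the Bayes factor
# for coin-flip data under two Beta-Binomial models, where the model
# evidence p(data | model) has a closed form.
from math import lgamma, exp

def log_beta(a, b):
    # Log of the Beta function B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_evidence(heads, tails, a, b):
    # Log marginal likelihood of a Binomial model with a Beta(a, b) prior
    # (binomial coefficient omitted; it cancels in the Bayes factor):
    # p(data | model) = B(a + heads, b + tails) / B(a, b)
    return log_beta(a + heads, b + tails) - log_beta(a, b)

heads, tails = 7, 3
# Model 1: uniform Beta(1, 1) prior on the coin's bias.
# Model 2: Beta(20, 20) prior, concentrated near a fair coin.
log_ev1 = log_evidence(heads, tails, 1.0, 1.0)
log_ev2 = log_evidence(heads, tails, 20.0, 20.0)

# Bayes factor > 1 favours model 1; < 1 favours model 2.
bayes_factor = exp(log_ev1 - log_ev2)
print(bayes_factor)
```

In realistic multilevel models the evidence has no closed form, which is exactly why the abstract turns to Markov chain Monte Carlo and sequential Monte Carlo approximations.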


Auditing Robot Learning for Safety and Compliance during Deployment

Bharadhwaj, Homanga

arXiv.org Artificial Intelligence

Robots of the future are going to exhibit increasingly human-like and super-human intelligence in a myriad of different tasks. They are also likely to fail and to be non-compliant with human preferences in increasingly subtle ways. Towards the goal of achieving autonomous robots, the robot learning community has made rapid strides in applying machine learning techniques to train robots through data and interaction. This makes the study of how best to audit these algorithms for compatibility with humans pertinent and urgent. In this paper, we draw inspiration from the AI Safety and Alignment communities and make the case that we need to urgently consider ways in which we can best audit our robot learning algorithms to check for failure modes, and to ensure that when operating autonomously, they are indeed behaving in ways that the human algorithm designers intend them to. We believe that this is a challenging problem that will require efforts from the entire robot learning community, and we do not attempt to provide a concrete framework for auditing. Instead, we outline high-level guidance and a possible approach towards formulating this framework, which we hope will serve as a useful starting point for thinking about auditing in the context of robot learning.


What to know about the EU's facial recognition regulation

#artificialintelligence

The European Commission's (EC) proposed Artificial Intelligence (AI) regulation – a much-awaited piece of legislation – is out. While this text must still go through consultations within the EU before its adoption, the proposal already provides a good sense of how the EU intends to govern the development of AI in the years to come: by following a risk-based approach to regulation. Other use cases, such as facial recognition technology (FRT) for authentication, are not on the list of high-risk applications and should therefore be subject to a lighter level of regulation. While technology providers have to maintain the highest level of performance and accuracy in their systems, this necessary step isn't the most critical one for preventing harm. The EC doesn't set any accuracy threshold to meet, but instead requires a robust and documented risk-mitigation process designed to prevent harm.