An Audit Framework for Adopting AI-Nudging on Children
Marianna Ganapini, Enrico Panai
arXiv.org Artificial Intelligence
This is an audit framework for AI-nudging. Unlike the static form of nudging usually discussed in the literature, we focus here on a type of nudging that uses large amounts of data to provide personalized, dynamic feedback and interfaces. We call this AI-nudging (Lanzing, 2019, p. 549; Yeung, 2017). The ultimate goal of the audit outlined here is to ensure that an AI system that uses nudges maintains a level of moral inertia and neutrality by complying with the audit's recommendations, requirements, and suggestions (in other words, its criteria). In the case of unintended negative consequences, the audit suggests risk mitigation mechanisms that can be put in place; in the case of unintended positive consequences, it suggests reinforcement mechanisms. Sponsored by the IBM-Notre Dame Tech Ethics Lab.
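The abstract's two-branch response logic (mitigate unintended harms, reinforce unintended benefits) could be sketched roughly as follows. All names and strings here are illustrative assumptions, not terminology from the paper itself:

```python
from dataclasses import dataclass
from enum import Enum


class Valence(Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"


@dataclass
class Consequence:
    """An observed effect of an AI-nudging system (hypothetical model)."""
    description: str
    valence: Valence
    intended: bool


def audit_recommendation(c: Consequence) -> str:
    """Map an observed consequence to the audit's suggested response,
    following the two branches described in the abstract."""
    if c.intended:
        # Intended outcomes are assessed against the audit criteria elsewhere;
        # the branching in the abstract covers unintended ones.
        return "no action: consequence matches the system's stated goals"
    if c.valence is Valence.NEGATIVE:
        return "apply a risk mitigation mechanism"
    return "apply a reinforcement mechanism"
```

For example, an unintended rise in compulsive engagement would fall under the mitigation branch, while an unintended improvement in healthy usage habits would fall under reinforcement.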
Apr-25-2023
- Country:
- Europe
- France (0.04)
- Italy (0.04)
- United Kingdom
- England > Oxfordshire
- Oxford (0.04)
- Wales (0.04)
- North America
- Canada > Quebec
- Montreal (0.04)
- United States
- Indiana > St. Joseph County
- Notre Dame (0.04)
- New York (0.04)
- Genre:
- Research Report (1.00)
- Industry:
- Education (1.00)
- Government (1.00)
- Health & Medicine
- Consumer Health (0.67)
- Therapeutic Area > Psychiatry/Psychology (1.00)
- Information Technology > Security & Privacy (1.00)
- Law (1.00)
- Leisure & Entertainment > Games
- Computer Games (0.94)
- Technology:
- Information Technology
- Artificial Intelligence
- Cognitive Science (0.68)
- Machine Learning (0.93)
- Representation & Reasoning > Agents (1.00)
- Communications > Social Media (1.00)
- Data Science > Data Mining (1.00)