
Ethical Evaluation


A Conceptual Framework for Ethical Evaluation of Machine Learning Systems

Gupta, Neha R., Hullman, Jessica, Subramonyam, Hari

arXiv.org Artificial Intelligence

Research in Responsible AI has developed a range of principles and practices to ensure that machine learning systems are used in a manner that is ethical and aligned with human values. However, a critical yet often neglected aspect of ethical ML is the set of ethical implications that arise when designing evaluations of ML systems. For instance, teams may have to balance a trade-off between highly informative tests that ensure downstream product safety and the potential fairness harms inherent to the implemented testing procedures. We conceptualize ethics-related concerns in standard ML evaluation techniques. Specifically, we present a utility framework that characterizes the key trade-off in ethical evaluation as balancing information gain against potential ethical harms. The framework serves as a tool for characterizing the challenges teams face and for systematically disentangling the competing considerations teams seek to balance. Differentiating between the types of issues encountered in evaluation allows us to highlight best practices from analogous domains, such as clinical trials and automotive crash testing, which navigate these issues in ways that can offer inspiration for improving evaluation processes in ML. Our analysis underscores the critical need for development teams to deliberately assess and manage the ethical complexities that arise during the evaluation of ML systems, and for the industry to move toward designing institutional policies that support ethical evaluations.


A Logic-based Multi-agent System for Ethical Monitoring and Evaluation of Dialogues

Dyoub, Abeer, Costantini, Stefania, Letteri, Ivan, Lisi, Francesca A.

arXiv.org Artificial Intelligence

Dialogue systems are tools designed for various practical purposes involving human-machine interaction. These systems should be built on ethical foundations because their behavior may heavily influence users (especially children). The primary objective of this paper is to present the architecture and prototype implementation of a Multi-Agent System (MAS) designed for the ethical monitoring and evaluation of a dialogue system. A prototype application is developed and presented for monitoring and evaluating the ethical behavior of chatting agents (human or artificial) in an online customer service chat point with respect to their institution or company's codes of ethics and conduct. Future work and open issues in this research are discussed.


Guiding the Ethics of Artificial Intelligence

#artificialintelligence

This blog post is adapted from our June 10 response to the National Institute of Standards and Technology's (NIST) request for information (RFI) 2019-08818: Developing a Federal AI Standards Engagement Plan. This RFI was released in response to an Executive Order directing NIST to create a plan for developing a set of standards for the acceptable use of AI technologies. Given the wide adoption of AI technologies and the lag in commensurate laws and regulations, this post aims to help NIST by highlighting the current state, plans, challenges, and opportunities in ethics and AI. In 2016, the European Union (EU) adopted the General Data Protection Regulation (GDPR), which expanded protections around EU citizens' personal data beginning in 2018. Meanwhile, China has extensively integrated AI technologies into its government and social structure via the China Social Credit System.