Burton, Emanuelle


The Heart of the Matter: Patient Autonomy as a Model for the Wellbeing of Technology Users

AAAI Conferences

We draw on concepts in medical ethics to consider how computer science, and AI in particular, can develop critical tools for thinking concretely about technology's impact on the wellbeing of the people who use it. We focus on patient autonomy---the ability to set the terms of one's encounter with medicine---and on the mediating concepts of informed consent and decisional capacity, which enable doctors to honor patients' autonomy in messy and non-ideal circumstances. This comparative study is organized around a fictional case study of a heart patient with cardiac implants. Using this case, we identify points of overlap and of difference between medical ethics and technology ethics, and leverage a discussion of that intertwined scenario to offer initial practical suggestions about how we can adapt the concepts of decisional capacity and informed consent to the discussion of technology design.



Why Teaching Ethics to AI Practitioners Is Important

AAAI Conferences

We argue that it is crucial to the future of AI that our students be trained in multiple complementary modes of ethical reasoning, so that they can make ethical design and implementation choices and ethical career decisions, and so that the software they write will take into account the complexities of acting ethically in the world.



Using "The Machine Stops" for Teaching Ethics in Artificial Intelligence and Computer Science

AAAI Conferences

A key front for ethical questions in artificial intelligence, and computer science more generally, is teaching students how to engage with the questions they will face in their professional careers based on the tools and technologies we teach them. In past work (and current teaching) we have advocated for the use of science fiction as an appropriate tool that enables AI researchers to engage students and the public on the current state and potential impacts of AI. We present teaching suggestions for E.M. Forster's 1909 story, "The Machine Stops," to teach topics in computer ethics. In particular, we use the story to examine ethical issues related to being constantly available for remote contact, physically isolated, and dependent on a machine --- all without mentioning computer games or other media to which students have strong emotional associations. We give a high-level view of common ethical theories and indicate how they inform the questions raised by the story and afford a structure for thinking about how to address them.


Teaching AI Ethics Using Science Fiction

AAAI Conferences

The cultural and political implications of modern AI research are not some far-off concern; they affect the world in the here and now. From the control systems, visualizations, and image-processing techniques that drive the machines of the modern military to the slow creep of a mechanized workforce, ethical questions surround us. Dealing with these questions means not just speculating about what could be, but teaching our students how to engage with them. We explore the use of science fiction as an appropriate tool to enable AI researchers to help engage students and the public on the current state and potential impacts of AI.