
 paternalism


Nudging Consent and the New Opt Out System to the Processing of Health Data in England

Meszaros, Janos, Ho, Chih-hsing, Compagnucci, Marcelo Corrales

arXiv.org Artificial Intelligence

This chapter examines the challenges of the revised opt out system and the secondary use of health data in England. The analysis of this data could be very valuable for science and medical treatment as well as for the discovery of new drugs. For this reason, the UK government established the care.data program in 2013. The aim of the project was to build a central nationwide database for research and policy planning. However, the processing of personal data was planned without proper public engagement. Research has suggested that IT companies, such as in the Google DeepMind deal case, had access to other kinds of sensitive data and failed to comply with data protection law. Since May 2018, the government has launched the national data opt out system with the hope of regaining public trust. Nevertheless, there is no evidence of significant changes in the national data opt out compared to the previous opt out system, either in the use of secondary data or in the choices that patients can make. The only notable difference seems to be in the way that these options are communicated and framed to patients. Most importantly, under the new national data opt out, the type 1 opt out option, which is the only choice that truly stops data from being shared outside direct care, will be removed in 2020. According to the Behavioral Law and Economics literature (Nudge Theory), default rules, such as the revised opt out system in England, are very powerful, because people tend to stick to the default choices made readily available to them. The crucial question analyzed in this chapter is whether it is desirable for the UK government to stop promoting the type 1 opt outs, and whether this could be seen as a kind of hard paternalism.


Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures through AI Systems

London, Alex John, Heidari, Hoda

arXiv.org Artificial Intelligence

The prevailing discourse around AI ethics lacks the language and formalism necessary to capture the diverse ethical concerns that emerge when AI systems interact with individuals. Drawing on Sen and Nussbaum's capability approach, we present a framework formalizing a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit or assistance to stakeholders. Such systems enhance stakeholders' ability to advance their life plans and well-being while upholding their fundamental rights. We characterize two necessary conditions for morally permissible interactions between AI systems and those impacted by their functioning, and two sufficient conditions for realizing the ideal of meaningful benefit. We then contrast this ideal with several salient failure modes, namely, forms of social interactions that constitute unjustified paternalism, coercion, deception, exploitation and domination. The proliferation of incidents involving AI in high-stakes domains underscores the gravity of these issues and the imperative to take an ethics-led approach to AI systems from their inception.


Artificial intelligence is infiltrating health care. We shouldn't let it make all the decisions.

MIT Technology Review

AI is already being used in health care. Some hospitals use the technology to help triage patients. Some use it to aid diagnosis, or to develop treatment plans. But the true extent of AI adoption is unclear, says Sandra Wachter, a professor of technology and regulation at the University of Oxford in the UK. "Sometimes we don't actually know what kinds of systems are being used," says Wachter.


Can Artificial Intelligence Increase Our Morality?

#artificialintelligence

In discussions of AI ethics, there's a lot of talk of designing "ethical" algorithms, those that produce behaviors we like. People have called for software that treats people fairly, that avoids violating privacy, that cedes to humanity decisions about who should live and die. But what about AI that benefits humans' morality, our own capacity to behave virtuously? That's the subject of a talk on "AI and Moral Self-Cultivation" given last week by Shannon Vallor, a philosopher at Santa Clara University who studies technology and ethics. The talk was part of a meeting on "Character, Social Connections and Flourishing in the 21st Century," hosted by Templeton World Charity Foundation, in Nassau, The Bahamas.

