Trust, Regulation, and Human-in-the-Loop AI

Communications of the ACM

Artificial intelligence (AI) systems employ learning algorithms that adapt to their users and environment, with learning either completed before deployment or allowed to continue during it. Because AI can optimize its behavior, a unit's behavior can diverge from its factory model after release, often at the perceived expense of safety, reliability, and human controllability. Since the Industrial Revolution, trust has ultimately resided in regulatory systems set up by governments and standards bodies. Research into human interactions with autonomous machines, however, demonstrates a shift in the locus of trust: we must trust non-deterministic systems such as AI to self-regulate, albeit within boundaries. This radical shift is one of the biggest issues facing the deployment of AI in the European region.
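
To make the "self-regulate within boundaries" idea concrete, here is a minimal, hypothetical Python sketch: an adaptive controller is free to update itself online, while a fixed safety envelope bounds every action it proposes. All names and numbers below are illustrative assumptions, not from the article.

    # Sketch of bounded self-regulation: the adaptive policy may drift as it
    # learns online, but a fixed, externally certified envelope bounds its
    # actions. All names and values are illustrative assumptions.

    class SafetyEnvelope:
        """Fixed action bounds set by a regulator or standards body."""
        def __init__(self, low: float, high: float):
            self.low, self.high = low, high

        def clamp(self, action: float) -> float:
            return max(self.low, min(self.high, action))

    class AdaptiveController:
        """Toy controller whose gain drifts as it adapts during deployment."""
        def __init__(self, gain: float = 1.0):
            self.gain = gain

        def propose(self, error: float) -> float:
            return self.gain * error

        def adapt(self, error: float, lr: float = 0.1) -> None:
            # Naive online update: factory-model behaviour can diverge.
            self.gain += lr * abs(error)

    envelope = SafetyEnvelope(low=-1.0, high=1.0)
    controller = AdaptiveController()

    for error in [0.5, 2.0, -3.0, 4.0]:
        raw = controller.propose(error)
        safe = envelope.clamp(raw)   # trust rests on the fixed boundary,
        controller.adapt(error)      # not on the learned behaviour itself
        print(f"proposed={raw:+.2f}  applied={safe:+.2f}")

The design choice here is that trust attaches to the fixed envelope, which can be certified once, rather than to the learned behaviour, which may drift after release.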


State of AI Ethics Report (Volume 6, February 2022)

arXiv.org Artificial Intelligence

This report from the Montreal AI Ethics Institute (MAIEI) covers the most salient progress in research and reporting over the second half of 2021 in the field of AI ethics. Particular emphasis is placed on an "Analysis of the AI Ecosystem", "Privacy", "Bias", "Social Media and Problematic Information", "AI Design and Governance", "Laws and Regulations", "Trends", and other areas covered in the "Outside the Boxes" section. The two AI spotlights feature application pieces on "Constructing and Deconstructing Gender with AI-Generated Art" as well as "Will an Artificial Intellichef be Cooking Your Next Meal at a Michelin Star Restaurant?". Given MAIEI's mission to democratize AI, submissions from external collaborators have been featured, including pieces on the "Challenges of AI Development in Vietnam: Funding, Talent and Ethics" and on using "Representation and Imagination for Preventing AI Harms". The report is a comprehensive overview of the key issues in the field of AI ethics in 2021, the trends that are emerging, the gaps that exist, and a peek into what to expect from the field in 2022. It is a resource for researchers and practitioners alike as they set their research and development agendas to contribute to the field of AI ethics.


Technology Ethics in Action: Critical and Interdisciplinary Perspectives

arXiv.org Artificial Intelligence

This special issue interrogates the meaning and impacts of "tech ethics": the embedding of ethics into digital technology research, development, use, and governance. In response to concerns about the social harms associated with digital technologies, many individuals and institutions have articulated the need for a greater emphasis on ethics in digital technology. Yet as more groups embrace the concept of ethics, critical discourses have emerged questioning whose ethics are being centered, whether "ethics" is the appropriate frame for improving technology, and what it means to develop "ethical" technology in practice. This interdisciplinary issue takes up these questions, interrogating the relationships among ethics, technology, and society in action. This special issue engages with the normative and contested notions of ethics itself, how ethics has been integrated with technology across domains, and potential paths forward to support more just and egalitarian technology. Rather than starting from philosophical theories, the authors in this issue orient their articles around the real-world discourses and impacts of tech ethics--i.e., tech ethics in action.


Is AI ageist? Researchers examine impact of technology on older users

#artificialintelligence

Researchers from the University of Toronto and University of Cambridge are looking into the ways ageism – prejudice against individuals based on age – can be encoded into technologies such as artificial intelligence, which many of us now encounter daily. This age-related bias in AI, also referred to as "digital ageism," is explored in a new paper led by Charlene Chu, an affiliate scientist at the Toronto Rehabilitation Institute's KITE research arm, part of the University Health Network (UHN), and an assistant professor at the Lawrence S. Bloomberg Faculty of Nursing. The paper was recently published in The Gerontologist, the leading journal of gerontology. "The COVID-19 pandemic has heightened our awareness of how dependent our society is on technology," says Chu. "Huge numbers of older adults are turning to technology in their daily lives, which has created a sense of urgency for researchers to try to understand digital ageism, and the risks and harms associated with AI biases."


Ethical and social risks of harm from Language Models

arXiv.org Artificial Intelligence

This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs). In order to foster advances in responsible innovation, an in-depth understanding of the potential risks posed by these models is needed. A wide range of established and anticipated risks are analysed in detail, drawing on multidisciplinary expertise and literature from computer science, linguistics, and social sciences. We outline six specific risk areas: I. Discrimination, Exclusion and Toxicity; II. Information Hazards; III. Misinformation Harms; IV. Malicious Uses; V. Human-Computer Interaction Harms; VI. Automation, Access, and Environmental Harms. The first area concerns the perpetuation of stereotypes, unfair discrimination, exclusionary norms, toxic language, and lower performance by social group. The second focuses on risks from private data leaks or from LMs correctly inferring sensitive information. The third addresses risks arising from poor, false, or misleading information, including in sensitive domains, and knock-on risks such as the erosion of trust in shared information. The fourth considers risks from actors who try to use LMs to cause harm. The fifth focuses on risks specific to LMs used to underpin conversational agents that interact with human users, including unsafe use, manipulation, or deception. The sixth discusses the risk of environmental harm, job automation, and other challenges that may have a disparate effect on different social groups or communities. In total, we review 21 risks in depth. We discuss the points of origin of different risks and point to potential mitigation approaches. Lastly, we discuss organisational responsibilities in implementing mitigations, and the role of collaboration and participation. We highlight directions for further research, particularly on expanding the toolkit for assessing and evaluating the outlined risks in LMs.
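
As one hypothetical instance of the evaluation toolkit the authors call for, the sketch below measures a risk-area-I style performance disparity by social group from per-example correctness records. The groups, records, and the single "gap" statistic are illustrative assumptions, not the paper's method.

    # Sketch of a group-disparity audit (risk area I). The records stand in
    # for per-example scoring of an LM's outputs; everything here is a toy
    # assumption, not the paper's methodology.

    from collections import defaultdict

    # (group, correct) pairs, e.g. whether the LM answered acceptably on an
    # example associated with a given social group.
    records = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False),
    ]

    totals, hits = defaultdict(int), defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        hits[group] += correct

    accuracy = {g: hits[g] / totals[g] for g in totals}
    gap = max(accuracy.values()) - min(accuracy.values())

    print(accuracy)                       # per-group accuracy
    print(f"disparity gap = {gap:.2f}")   # compare against an agreed threshold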


The Impact of Tech in 2022

#artificialintelligence

Now is the time to upgrade our technologies, and in 2022, AI, ML, 5G, and cloud computing will be the most important technologies to watch. The COVID-19 pandemic will continue to have a wide-ranging influence on our lives in 2022. As a result, the digitalization and virtualization of business and society will continue to increase. As we enter the new year, however, the demand for sustainability, ever-increasing data volumes, and faster computation and network speeds will reclaim their positions as the most essential drivers of digital transformation. IEEE has announced the conclusions of a new study of global technology executives from the United States, the United Kingdom, China, India, and Brazil, titled "The Impact of Technology in 2022 and Beyond: an IEEE Global Study."


Learning from learning machines: a new generation of AI technology to meet the needs of science

arXiv.org Artificial Intelligence

We outline emerging opportunities and challenges to enhance the utility of AI for scientific discovery. The distinct goals of AI for industry versus the goals of AI for science create tension between identifying patterns in data versus discovering patterns in the world from data. If we address the fundamental challenges associated with "bridging the gap" between domain-driven scientific models and data-driven AI learning machines, then we expect that these AI models can transform hypothesis generation, scientific discovery, and the scientific process itself.
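
One common way to "bridge the gap" between domain-driven models and data-driven learning is to regularise a data-driven estimate with a domain constraint. The sketch below is a toy assumption rather than the authors' proposal: it denoises measurements of exponential decay by penalising violations of the governing equation dN/dt = -kN.

    # Toy "domain + data" reconstruction: denoise observations by penalising
    # violations of the known governing equation dN/dt = -k N. All values
    # are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 4.0, 40)
    k = 0.8                                   # domain knowledge: decay rate
    clean = np.exp(-k * t)
    data = clean + 0.05 * rng.normal(size=t.size)

    # Forward-difference operator D so that (D @ n)[i] ~ dn/dt at t[i].
    m, dt = t.size, t[1] - t[0]
    D = (np.eye(m, k=1) - np.eye(m)) / dt
    D[-1] = D[-2]                             # one-sided difference at the edge

    # Minimise ||n - data||^2 + lam * ||(D + k I) n||^2, a linear problem.
    lam = 0.5
    A = D + k * np.eye(m)
    n = np.linalg.solve(np.eye(m) + lam * A.T @ A, data)

    print("raw data error      :", round(float(np.mean((data - clean) ** 2)), 5))
    print("physics-aware error :", round(float(np.mean((n - clean) ** 2)), 5))

Because the penalty pulls the estimate toward discrete solutions of the decay law, noise that is inconsistent with the domain model is suppressed, which is the pattern-in-the-world versus pattern-in-the-data tension the abstract describes.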


New Artificial Intelligence projects funded to tackle health inequalities

#artificialintelligence

NHSX's NHS AI Lab and the Health Foundation have today awarded £1.4m to four projects to address racial and ethnic health inequalities using artificial intelligence (AI). The winning projects range from using AI to investigate disparities in maternal health outcomes to developing standards and guidance to ensure that datasets for training and testing AI systems are inclusive and generalisable. The NHS AI Lab introduced the AI Ethics Initiative to support research and practical interventions that complement existing efforts to validate, evaluate and regulate AI-driven technologies in health and care, with a focus on countering health inequalities. Today's announcement is the result of the Initiative's partnership with The Health Foundation on a research competition, enabled by the NIHR, to understand and enable opportunities to use AI to address inequalities and to optimise datasets and improve AI development, testing and deployment. 'As we strive to ensure NHS patients are amongst the first in the world to benefit from leading AI, we also have a responsibility to ensure those technologies don't exacerbate existing health inequalities.'
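
As a hypothetical illustration of the dataset-inclusiveness work described above, the sketch below compares the group composition of a training set against reference population shares and flags under-representation. The groups, counts, and the 80% tolerance are invented for the example, not taken from the funded projects.

    # Toy dataset-composition audit: flag groups whose share of the training
    # data falls well below their share of a reference population. Groups,
    # counts, and the 80% tolerance are invented for illustration.

    from collections import Counter

    train_groups = ["white"] * 800 + ["black"] * 60 + ["asian"] * 90 + ["other"] * 50
    reference = {"white": 0.74, "black": 0.09, "asian": 0.10, "other": 0.07}

    counts = Counter(train_groups)
    total = sum(counts.values())

    for group, target in reference.items():
        share = counts.get(group, 0) / total
        flag = "UNDER-REPRESENTED" if share < 0.8 * target else "ok"
        print(f"{group:6s} dataset={share:5.1%}  population={target:5.1%}  {flag}")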


Combatting UK health inequalities with Artificial Intelligence

#artificialintelligence

The £1.4m funding comes from the NHS AI Lab – part of NHSX – and The Health Foundation, with the projects aiming to use AI to address racial and ethnic health inequalities in the UK. The selected initiatives will apply the technology to a broad range of investigations, from assessing disparities in maternal health outcomes to designing standards and guidance to ensure AI systems are inclusive and generalisable. The NHS AI Lab introduced the AI Ethics Initiative in March 2021 to support research and practical interventions that enhance existing efforts to validate, evaluate, and regulate AI-based technologies in the healthcare sector and to mitigate health inequalities. This funding results from the Lab's partnership with The Health Foundation on a research competition enabled by the NIHR. The endeavour saw the organisations collaborate to explore and create opportunities to employ AI to address health inequalities and to optimise datasets to improve AI's development, testing, and deployment.


Truthful AI: Developing and governing AI that does not lie

arXiv.org Artificial Intelligence

In many contexts, lying -- the use of verbal falsehoods to deceive -- is harmful. While lying has traditionally been a human affair, AI systems that make sophisticated verbal statements are becoming increasingly prevalent. This raises the question of how we should limit the harm caused by AI "lies" (i.e. falsehoods that are actively selected for). Human truthfulness is governed by social norms and by laws (against defamation, perjury, and fraud). Differences between AI and humans present an opportunity to have more precise standards of truthfulness for AI, and to have these standards rise over time. This could provide significant benefits to public epistemics and the economy, and mitigate risks of worst-case AI futures. Establishing norms or laws of AI truthfulness will require significant work to: (1) identify clear truthfulness standards; (2) create institutions that can judge adherence to those standards; and (3) develop AI systems that are robustly truthful. Our initial proposals for these areas include: (1) a standard of avoiding "negligent falsehoods" (a generalisation of lies that is easier to assess); (2) institutions to evaluate AI systems before and after real-world deployment; and (3) explicitly training AI systems to be truthful via curated datasets and human interaction. A concerning possibility is that evaluation mechanisms for eventual truthfulness standards could be captured by political interests, leading to harmful censorship and propaganda. Avoiding this might take careful attention. And since the scale of AI speech acts might grow dramatically over the coming decades, early truthfulness standards might be particularly important because of the precedents they set.
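
A minimal sketch of the paper's proposal (2), evaluating a system against a truthfulness standard before deployment: score its statements against a curated set of verified claims and report a falsehood rate. The model stub, the facts, and the exact-match rule are simplifying assumptions; real adjudication would be far more involved.

    # Toy pre-deployment truthfulness check: compare a system's statements
    # against curated, verified claims. The model stub, facts, and exact-match
    # rule are simplifying assumptions, not the paper's mechanism.

    CURATED_FACTS = {
        "capital of france": "paris",
        "boiling point of water at sea level in celsius": "100",
    }

    def model_answer(question: str) -> str:
        # Stand-in for the AI system under evaluation.
        canned = {
            "capital of france": "paris",
            "boiling point of water at sea level in celsius": "90",
        }
        return canned[question]

    falsehoods = []
    for question, truth in CURATED_FACTS.items():
        answer = model_answer(question)
        if answer.strip().lower() != truth:
            falsehoods.append((question, answer, truth))

    rate = len(falsehoods) / len(CURATED_FACTS)
    print(f"falsehood rate: {rate:.0%}")
    for q, a, t in falsehoods:
        print(f"  {q!r}: model said {a!r}, verified answer is {t!r}")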