EthicAlly: a Prototype for AI-Powered Research Ethics Support for the Social Sciences and Humanities

Grohmann, Steph

arXiv.org Artificial Intelligence

In biomedical science, review by a Research Ethics Committee (REC) is an indispensable way of protecting human subjects from harm. However, in the social sciences and the humanities, mandatory ethics compliance has long been met with scepticism, as biomedical models of ethics can map poorly onto methodologies involving complex socio-political and cultural considerations. As a result, tailored ethics training and support, as well as access to RECs with the necessary expertise, are lacking in some areas, including parts of Europe and low- and middle-income countries. This paper suggests that Generative AI can meaningfully contribute to closing these gaps, illustrating this claim by presenting EthicAlly, a proof-of-concept prototype for an AI-powered ethics support system for social science and humanities researchers. Drawing on constitutional AI technology and a collaborative prompt development methodology, EthicAlly provides structured ethics assessment that incorporates both universal ethics principles and the contextual and interpretive considerations relevant to most social science research. By supporting researchers in ethical research design and in preparing REC submissions, this kind of system can also help ease the burden on institutional RECs, without attempting to automate or replace human ethical oversight.


Decoding the Black Box: Integrating Moral Imagination with Technical AI Governance

Tallam, Krti

arXiv.org Artificial Intelligence

This paper examines the intricate interplay among AI safety, security, and governance by integrating technical systems engineering with principles of moral imagination and ethical philosophy. Drawing on foundational insights from Weapons of Math Destruction and Thinking in Systems alongside contemporary debates in AI ethics, we develop a comprehensive multi-dimensional framework designed to regulate AI technologies deployed in high-stakes domains such as defense, finance, healthcare, and education. Our approach combines rigorous technical analysis, quantitative risk assessment, and normative evaluation to expose systemic vulnerabilities inherent in opaque, black-box models. Detailed case studies, including analyses of Microsoft Tay (2016) and the UK A-Level Grading Algorithm (2020), demonstrate how security lapses, bias amplification, and lack of accountability can precipitate cascading failures that undermine public trust. We conclude by outlining targeted strategies for enhancing AI resilience through adaptive regulatory mechanisms, robust security protocols, and interdisciplinary oversight, thereby advancing the state of the art in ethical and technical AI governance.


AI Act and Large Language Models (LLMs): When critical issues and privacy impact require human and ethical oversight

Fabiano, Nicola

arXiv.org Artificial Intelligence

On March 13, 2024, the European Parliament approved the final version of the European Artificial Intelligence Act (AI Act), and its publication in the Official Journal of the European Union is awaited. The AI Act is a long text comprising 180 recitals, XIII chapters with 113 articles, and XIII annexes. It is an essential legal framework for AI and the first comprehensive legislation on AI.


Why Are We Failing at the Ethics of AI?

#artificialintelligence

As you read this, AI systems and algorithmic technologies are being embedded and scaled far more quickly than existing governance frameworks (i.e., the rules of the road) are evolving. While it is clear that AI systems offer opportunities across various areas of life, what amounts to a responsible perspective on their ethics and governance is yet to be realized. This should be setting off alarm bells across society. The current inability of actors to meaningfully address AI ethics has created a perfect storm: one in which AI is exacerbating existing inequalities while simultaneously creating new systemic issues at a rapid pace. But why hasn't this issue been effectively addressed?


Did that artificially intelligent chatbot just crack a rude joke?

#artificialintelligence

A software developer with PolyAI who was testing the system asked about booking a table for himself and a Serbian friend. "Yes, we allow children at the restaurant," the voice bot replied, according to PolyAI founder Nikola Mrksic. Seemingly out of nowhere, the bot was trying to make an obnoxious joke about people from Serbia. When it was asked about bringing a Polish friend, it replied, "Yes, but you can't bring your own booze." Mrksic, who is Serbian, admits that the system appeared to think people from Serbia were immature.


Whatever happened to the DeepMind AI ethics board Google promised?

The Guardian

Three years ago, artificial intelligence research firm DeepMind was acquired by Google for a reported £400m. As part of the acquisition, Google agreed to set up an ethics and safety board to ensure that its AI technology would not be abused. The existence of the ethics board was not confirmed at the time of the acquisition announcement, and the public only became aware of it through a leak to industry news site The Information. In the years since, senior members of DeepMind have publicly confirmed the board's existence, arguing that it is one of the ways the company is trying to "lead the way" on ethical issues in AI. Yet in all that time DeepMind has consistently refused to say who sits on the board or what it discusses, or to publicly confirm whether it has even officially met. The Guardian has asked DeepMind and Google multiple times since the acquisition on 26 January 2014 for transparency around the board, and has received just one answer on the record.