AIhub monthly digest: December 2025 – studying bias in AI-based recruitment tools, an image dataset for ethical AI benchmarking, and end of year compilations
Welcome to our monthly digest, where you can catch up with any AIhub stories you may have missed, peruse the latest news, recap recent events, and more. This month, we look into bias in AI-based recruitment tools, find out about a new image dataset for ethical AI benchmarking, dig into human-robot interactions and social robotics, and look back on another busy year in the world of AI. We've been meeting some of the PhD students who were selected to take part in the Doctoral Consortium at the European Conference on Artificial Intelligence (ECAI-2025). In the second interview of the series, we caught up with Frida Hartman to find out how her PhD is going so far, and her plans for the next steps in her investigations. Frida, along with co-authors Mario Mirabile and Michele Dusi, was also the winner of the ECAI-2025 Diversity & Inclusion Competition, for work entitled .
Interview with Alice Xiang: Fair human-centric image dataset for ethical AI benchmarking
Earlier this month, Sony AI released a dataset that establishes a new benchmark for AI ethics in computer vision models. The research behind the dataset, named Fair Human-Centric Image Benchmark (FHIBE), has been published in Nature. FHIBE is the first publicly-available, globally-diverse, consent-based human image dataset (inclusive of over 10,000 human images) for evaluating bias across a wide variety of computer vision tasks. We sat down with project lead Alice Xiang, Global Head of AI Governance at Sony Group and Lead Research Scientist for AI Ethics at Sony AI, to discuss the project and the broader implications of this research. Could you start by introducing the project and taking us through some of the main contributions?
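A bias evaluation of the kind FHIBE supports typically compares a model's performance across demographic groups. As a rough illustration only (this is not FHIBE's actual API; the function and toy annotations below are invented), one might compute per-group accuracy for a vision task and report the largest gap as a disparity indicator:

```python
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Per-group accuracy for a classification-style vision task,
    plus the largest pairwise gap as a simple disparity indicator."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    acc = {g: correct[g] / total[g] for g in total}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Toy, invented annotations for two demographic groups "A" and "B".
preds  = [1, 1, 0, 1, 0, 1, 1, 1]
labels = [1, 0, 0, 1, 1, 0, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
per_group, gap = accuracy_by_group(preds, labels, groups)
```

A large `gap` flags that the model's errors are concentrated in one group, which is exactly the kind of disparity a consent-based, globally-diverse dataset makes measurable.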
Ethical AI: Towards Defining a Collective Evaluation Framework
Sharma, Aasish Kumar, Kyosev, Dimitar, Kunkel, Julian
Artificial Intelligence (AI) is transforming sectors such as healthcare, finance, and autonomous systems, offering powerful tools for innovation. Yet its rapid integration raises urgent ethical concerns related to data ownership, privacy, and systemic bias. Issues like opaque decision-making, misleading outputs, and unfair treatment in high-stakes domains underscore the need for transparent and accountable AI systems. This article addresses these challenges by proposing a modular ethical assessment framework built on ontological blocks of meaning: discrete, interpretable units that encode ethical principles such as fairness, accountability, and ownership. By integrating these blocks with FAIR (Findable, Accessible, Interoperable, Reusable) principles, the framework supports scalable, transparent, and legally aligned ethical evaluations, including compliance with the EU AI Act. Using a real-world use case in AI-powered investor profiling, the paper demonstrates how the framework enables dynamic, behavior-informed risk classification. The findings suggest that ontological blocks offer a promising path toward explainable and auditable AI ethics, though challenges remain in automation and probabilistic reasoning.
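The idea of discrete, interpretable blocks that feed a tiered risk classification can be sketched in a few lines. This is a minimal illustration of the concept, not the paper's framework: the block fields, weights, and risk thresholds below are invented, and the tier names only loosely echo the EU AI Act's categories.

```python
from dataclasses import dataclass

@dataclass
class EthicalBlock:
    """A discrete, interpretable unit encoding one ethical principle."""
    principle: str       # e.g. "fairness", "accountability", "ownership"
    satisfied: bool      # did the system pass this check?
    weight: float = 1.0  # relative importance in the evaluation

def classify_risk(blocks):
    """Aggregate block outcomes into a coarse risk tier."""
    total = sum(b.weight for b in blocks)
    failed = sum(b.weight for b in blocks if not b.satisfied)
    ratio = failed / total if total else 0.0
    if ratio == 0.0:
        return "minimal"
    if ratio < 0.34:
        return "limited"
    if ratio < 0.67:
        return "high"
    return "unacceptable"

# An illustrative investor-profiling system failing one check.
profile = [
    EthicalBlock("fairness", satisfied=True),
    EthicalBlock("accountability", satisfied=False),
    EthicalBlock("ownership", satisfied=True),
]
tier = classify_risk(profile)
```

Because each block is a named, self-describing unit, the evaluation stays auditable: one can point at exactly which principle failed and how much it contributed to the tier.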
Ethical Statistical Practice and Ethical AI
Artificial Intelligence (AI) is a field that makes intensive, combined use of computing and, often, data and statistics to solve problems or make predictions. AI has evolved at a remarkable pace over the past few years, and this has led to an increase in social, cultural, industrial, scientific, and governmental concerns about the ethical development and use of AI systems worldwide. The ASA has issued a statement on ethical statistical practice and AI (ASA, 2024), which echoes similar statements from other groups. Here we discuss the support for ethical statistical practice and ethical AI that has been established in long-standing human rights law and in ethical practice standards for computing and statistics. Multiple sources of support for ethical statistical practice and ethical AI derive from these source documents, and they are critical for strengthening the operationalization of the "Statement on Ethical AI for Statistics Practitioners". These resources are explicated for interested readers to use in guiding their development and use of AI in, and through, their statistical practice.
Five ethical principles for generative AI in scientific research
X (Twitter): ZLinPsy. Acknowledgments: The writing was supported by the National Key R&D Program of China STI2030 Major Projects (2021ZD0204200), the National Natural Science Foundation of China (32071045), and the Shenzhen Fundamental Research Program (JCYJ20210324134603010). Abstract: Generative artificial intelligence (AI) tools like large language models (LLMs) are rapidly transforming academic research and real-world applications. However, discussions on ethical guidelines for generative AI in science remain fragmented, underscoring the urgent need for consensus-based standards. Common scenarios are outlined to demonstrate potential ethical violations. We argue that global consensus, coupled with targeted training and enforcement, is critical to promoting AI's benefits while safeguarding research integrity. Keywords: generative AI, science, applications, transparency, reproducibility. Generative AI tools, including large language models (LLMs) like ChatGPT and Bard, are rapidly infiltrating academic corridors, aiding in diverse tasks such as writing, coding, idea generation, material creation, and data analysis (1, 2).
Investigating Responsible AI for Scientific Research: An Empirical Study
Bano, Muneera, Zowghi, Didar, Shea, Pip, Ibarra, Georgina
Scientific research organizations that are developing and deploying Artificial Intelligence (AI) systems are at the intersection of technological progress and ethical considerations. The push for Responsible AI (RAI) in such institutions underscores the increasing emphasis on integrating ethical considerations within AI design and development, championing core values like fairness, accountability, and transparency. For scientific research organizations, prioritizing these practices is paramount not just for mitigating biases and ensuring inclusivity, but also for fostering trust in AI systems among both users and broader stakeholders. In this paper, we explore the RAI practices at a research organization, aiming to assess the awareness and preparedness regarding the ethical risks inherent in AI design and development. We adopted a mixed-method research approach, utilising a comprehensive survey combined with follow-up in-depth interviews with selected participants from AI-related projects. Our results revealed certain knowledge gaps concerning ethical, responsible, and inclusive AI, with limited awareness of the available AI ethics frameworks, and an overarching underestimation of the ethical risks that AI technologies can present, especially when implemented without proper guidelines and governance. Our findings reveal the need for a holistic and multi-tiered strategy to uplift capabilities and better support science research teams in responsible, ethical, and inclusive AI development and deployment.
Tech firms failing to 'walk the walk' on ethical AI, report says
Tech companies that have promised to support the ethical development of artificial intelligence (AI) are failing to live up to their pledges as safety takes a back seat to performance metrics and product launches, according to a new report by Stanford University researchers. Despite publishing AI principles and employing social scientists and engineers to conduct research and develop technical solutions related to AI ethics, many private companies have yet to prioritise the adoption of ethical safeguards, Stanford's Institute for Human-Centered Artificial Intelligence said in the report released on Thursday. "Companies often 'talk the talk' of AI ethics but rarely 'walk the walk' by adequately resourcing and empowering teams that work on responsible AI," researchers Sanna J Ali, Angele Christin, Andrew Smart and Riitta Katila said in the report titled Walking the Walk of AI Ethics in Technology Companies. Drawing on the experiences of 25 "AI ethics practitioners", the report said workers involved in promoting AI ethics complained of lacking institutional support and being siloed off from other teams within large organisations despite promises to the contrary. Employees reported a culture of indifference or hostility due to product managers who see their work as damaging to a company's productivity, revenue or product launch timeline, the report said.
If our aim is to build morality into an artificial agent, how might we begin to go about doing so?
Seeamber, Reneira, Badea, Cosmin
As Artificial Intelligence (AI) becomes pervasive in most fields, from healthcare to autonomous driving, it is essential that we find successful ways of building morality into our machines, especially for decision-making. However, the question of what it means to be moral is still debated, particularly in the context of AI. In this paper, we highlight the different aspects that should be considered when building moral agents, including the most relevant moral paradigms and challenges. We also discuss the top-down and bottom-up approaches to design and the role of emotion and sentience in morality. We then propose solutions including a hybrid approach to design and a hierarchical approach to combining moral paradigms. We emphasize how governance and policy are becoming ever more critical in AI Ethics and in ensuring that the tasks we set for moral agents are attainable, that ethical behavior is achieved, and that we obtain good AI.
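One way a hierarchical combination of moral paradigms can work in decision-making is lexicographically: a top-down (deontological) layer first vetoes impermissible actions, and a bottom-up (consequentialist) layer then ranks what remains. The sketch below is a generic illustration of that idea, not the authors' proposal; the actions, rules, and utility scores are invented.

```python
def choose_action(actions, forbidden, utility):
    """Hierarchical moral decision rule: a deontological layer vetoes
    forbidden actions, then a utilitarian layer picks the best of the rest."""
    permitted = [a for a in actions if a not in forbidden]
    if not permitted:
        return None  # no morally permissible action remains
    return max(permitted, key=utility)

actions = ["deceive", "delay", "disclose"]
forbidden = {"deceive"}                     # top-down rule: never deceive
utility = {"delay": 0.2, "disclose": 0.9}   # bottom-up scores for the rest
best = choose_action(actions, forbidden, utility.get)
```

Ordering the layers this way makes the deontological constraints non-negotiable: no amount of expected utility can rescue a vetoed action, which mirrors the hierarchical (rather than weighted-sum) combination the paper discusses.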
Towards A Unified Utilitarian Ethics Framework for Healthcare Artificial Intelligence
Emdad, Forhan Bin, Ho, Shuyuan Mary, Ravuri, Benhur, Hussain, Shezin
Artificial Intelligence (AI) aims to elevate healthcare by aiding clinical decision support. Overcoming the challenges related to the design of ethical AI will enable clinicians, physicians, healthcare professionals, and other stakeholders to use and trust AI in healthcare settings. This study attempts to identify, through a thematic analysis, the major ethical principles influencing the utility performance of AI at different technological levels such as data access, algorithms, and systems. We observed that justice, privacy, bias, lack of regulations, risks, and interpretability are the most important principles to consider for ethical AI. This data-driven study analyzed secondary survey data from the Pew Research Center (2020) covering 36 AI experts to categorize the top ethical principles of AI design. To resolve the ethical issues identified by the meta-analysis and domain experts, we propose a new utilitarian ethics-based theoretical framework for designing ethical AI for the healthcare domain.
On the meaning of uncertainty for ethical AI: philosophy and practice
Bird, Cassandra, Williamson, Daniel, Leonelli, Sabina
Whether and how data scientists, statisticians and modellers should be accountable for the AI systems they develop remains a controversial and highly debated topic, especially given the complexity of AI systems and the difficulties in comparing and synthesising competing claims arising from their deployment for data analysis. This paper proposes to address this issue by decreasing the opacity and heightening the accountability of decision making using AI systems, through the explicit acknowledgement of the statistical foundations that underpin their development and the ways in which these dictate how their results should be interpreted and acted upon by users. In turn, this enhances (1) the responsiveness of the models to feedback, (2) the quality and meaning of uncertainty on their outputs and (3) their transparency to evaluation. To exemplify this approach, we extend Posterior Belief Assessment to offer a route to belief ownership from complex and competing AI structures. We argue that this is a significant way to bring ethical considerations into mathematical reasoning, and to implement ethical AI in statistical practice. We demonstrate these ideas within the context of competing models used to advise the UK government on the spread of the Omicron variant of COVID-19 during December 2021.
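The underlying statistical move of weighing competing models by how well they explain the data can be illustrated with plain Bayesian model averaging. This is a generic sketch under simplifying assumptions, not the paper's Posterior Belief Assessment (which is considerably richer); the model likelihoods, priors, and forecasts below are invented.

```python
import math

def posterior_model_weights(log_likelihoods, priors):
    """Posterior probability of each competing model given the data,
    via Bayes' rule: p(M_i | data) ∝ p(data | M_i) * p(M_i)."""
    unnorm = [math.exp(ll) * p for ll, p in zip(log_likelihoods, priors)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def averaged_forecast(forecasts, weights):
    """Combine per-model forecasts into one belief-weighted estimate."""
    return sum(f * w for f, w in zip(forecasts, weights))

# Three hypothetical epidemic models with equal prior weight.
weights = posterior_model_weights([-2.0, -1.0, -3.0], [1/3, 1/3, 1/3])
forecast = averaged_forecast([1200.0, 1500.0, 900.0], weights)
```

Making the weights (and the priors behind them) explicit is one concrete way to own the beliefs an advisory forecast encodes, rather than presenting a single model's output as if it carried no structural uncertainty.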