Responsible AI
Responsible AI (RAI) Games and Ensembles
Several recent works have studied the societal effects of AI; these include issues such as fairness, robustness, and safety. In many of these objectives, a learner seeks to minimize its worst-case loss over a set of predefined distributions (known as uncertainty sets), with typical examples being perturbed versions of the empirical distribution. In other words, the aforementioned problems can be written as min-max problems over these uncertainty sets. In this work, we provide a general framework for studying these problems, which we refer to as Responsible AI (RAI) games. We provide two classes of algorithms for solving these games: (a) game-play based algorithms, and (b) greedy stagewise estimation algorithms. The former class is motivated by online learning and game theory, whereas the latter class is motivated by the classical statistical literature on boosting and regression. We empirically demonstrate the applicability and competitive performance of our techniques for solving several RAI problems, particularly around subpopulation shift.
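The game-play formulation in the abstract above can be sketched concretely. The following is a minimal, illustrative implementation of a min-max "RAI game" over a finite set of subgroups, assuming a standard setup (multiplicative weights for the adversary, gradient descent for the learner, logistic loss); the function names, learning rates, and round count are hypothetical choices, not from the paper.

```python
import numpy as np

def subgroup_losses(w, Xs, ys):
    """Mean logistic loss of linear model w on each subgroup (uncertainty set)."""
    losses = []
    for X, y in zip(Xs, ys):
        z = X @ w
        losses.append(np.mean(np.log1p(np.exp(-y * z))))
    return np.array(losses)

def rai_game(Xs, ys, rounds=200, eta_adv=0.5, eta_learn=0.1):
    """Game-play algorithm sketch: adversary maintains a mixture p over
    subgroups and upweights the hardest ones (multiplicative weights);
    learner takes a gradient step on the p-weighted loss."""
    d = Xs[0].shape[1]
    w = np.zeros(d)
    p = np.ones(len(Xs)) / len(Xs)          # adversary's mixture over subgroups
    for _ in range(rounds):
        losses = subgroup_losses(w, Xs, ys)
        p *= np.exp(eta_adv * losses)       # upweight high-loss subgroups
        p /= p.sum()
        grad = np.zeros(d)                  # gradient of p-weighted logistic loss
        for pk, X, y in zip(p, Xs, ys):
            z = X @ w
            s = -y / (1.0 + np.exp(y * z))  # d/dz log(1 + exp(-y z))
            grad += pk * (X.T @ s) / len(y)
        w -= eta_learn * grad
    return w, p
```

Run on data split into subgroups, the returned mixture p reveals which subgroups the adversary found hardest, and the learner's worst-case (max over subgroups) loss is what the min-max objective controls.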
Towards responsible AI for education: Hybrid human-AI to confront the Elephant in the room
Hooshyar, Danial, Šír, Gustav, Yang, Yeongwook, Kikas, Eve, Hämäläinen, Raija, Kärkkäinen, Tommi, Gašević, Dragan, Azevedo, Roger
Despite significant advancements in AI-driven educational systems and ongoing calls for responsible AI for education, several critical issues remain unresolved -- acting as the elephant in the room within AI in education, learning analytics, educational data mining, learning sciences, and educational psychology communities. This critical analysis identifies and examines nine persistent challenges that continue to undermine the fairness, transparency, and effectiveness of current AI methods and applications in education. These include: (1) the lack of clarity around what AI for education truly means -- often ignoring the distinct purposes, strengths, and limitations of different AI families -- and the trend of equating it with domain-agnostic, company-driven large language models; (2) the widespread neglect of essential learning processes such as motivation, emotion, and (meta)cognition in AI-driven learner modelling and their contextual nature; (3) limited integration of domain knowledge and lack of stakeholder involvement in AI design and development; (4) continued use of non-sequential machine learning models on temporal educational data; (5) misuse of non-sequential metrics to evaluate sequential models; (6) use of unreliable explainable AI methods to provide explanations for black-box models; (7) ignoring ethical guidelines in addressing data inconsistencies during model training; (8) use of mainstream AI methods for pattern discovery and learning analytics without systematic benchmarking; and (9) overemphasis on global prescriptions while overlooking localised, student-specific recommendations. Supported by theoretical and empirical research, we demonstrate how hybrid AI methods -- specifically neural-symbolic AI -- can address the elephant in the room and serve as the foundation for responsible, trustworthy AI systems in education.
- Europe > Finland > Central Finland > Jyväskylä (0.04)
- Europe > Estonia > Harju County > Tallinn (0.04)
- South America > Uruguay > Maldonado > Maldonado (0.04)
- (10 more...)
- Instructional Material (1.00)
- Research Report > Experimental Study (0.34)
- Education > Educational Technology > Educational Software > Computer Based Training (1.00)
- Education > Educational Setting (1.00)
- Education > Assessment & Standards (1.00)
- (2 more...)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.93)
Beyond Benchmarks: Responsible AI in Education Needs Learning Sciences
Learning sciences (LS) emerged in the 1980s from roots in cognitive science and AI, as well as other social sciences such as linguistics and anthropology. LS distinctively investigates the process of learning, in contrast to research on teaching, instructional design, policies, and institutions. LS also focuses on research in authentic educational settings, which contrasts with psychology or brain science research conducted in a laboratory. Learning scientists use and develop a variety of methods and theories, leading to a large body of knowledge. The field also has blurry lines with neighboring fields such as AI in education.
T2IBias: Uncovering Societal Bias Encoded in the Latent Space of Text-to-Image Generative Models
Sufian, Abu, Distante, Cosimo, Leo, Marco, Salam, Hanan
Text-to-image (T2I) generative models are largely used in AI-powered real-world applications and value creation. However, their strategic deployment raises critical concerns for responsible AI management, particularly regarding the reproduction and amplification of race- and gender-related stereotypes that can undermine organizational ethics. In this work, we investigate whether such societal biases are systematically encoded within the pretrained latent spaces of state-of-the-art T2I models. We conduct an empirical study across the five most popular open-source models, using ten neutral, profession-related prompts to generate 100 images per profession, resulting in a dataset of 5,000 images evaluated by diverse human assessors representing different races and genders. We demonstrate that all five models encode and amplify pronounced societal skew: caregiving and nursing roles are consistently feminized, while high-status professions such as corporate CEO, politician, doctor, and lawyer are overwhelmingly represented by males and mostly White individuals. We further identify model-specific patterns, such as QWEN-Image's near-exclusive focus on East Asian outputs, Kandinsky's dominance of White individuals, and SDXL's comparatively broader but still biased distributions. These results provide critical insights for AI project managers and practitioners, enabling them to select equitable AI models and customized prompts that generate images in alignment with the principles of responsible AI. We conclude by discussing the risks of these biases and proposing actionable strategies for bias mitigation in building responsible GenAI systems. The code and Data Repository: https://github.com/Sufianlab/T2IBias
- Europe > Denmark > Capital Region > Copenhagen (0.06)
- Europe > Italy (0.05)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.04)
- (5 more...)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.71)
"I Like That You Have to Poke Around": Instructors on How Experiential Approaches to AI Literacy Spark Inquiry and Critical Thinking
Warrier, Aparna Maya, Agarwal, Arav, Savelka, Jaromir, Bogart, Christopher, Burte, Heather
As artificial intelligence (AI) increasingly shapes decision-making across domains, there is a growing need to support AI literacy among learners beyond computer science. However, many current approaches rely on programming-heavy tools or abstract lecture-based content, limiting accessibility for non-STEM audiences. This paper presents findings from a study of AI User, a modular, web-based curriculum that teaches core AI concepts through interactive, no-code projects grounded in real-world scenarios. The curriculum includes eight projects; this study focuses on instructor feedback on Projects 5-8, which address applied topics such as natural language processing, computer vision, decision support, and responsible AI. Fifteen community college instructors participated in structured focus groups, completing the projects as learners and providing feedback through individual reflection and group discussion. Using thematic analysis, we examined how instructors evaluated the design, instructional value, and classroom applicability of these experiential activities. Findings highlight instructors' appreciation for exploratory tasks, role-based simulations, and real-world relevance, while also surfacing design trade-offs around cognitive load, guidance, and adaptability for diverse learners. This work extends prior research on AI literacy by centering instructor perspectives on teaching complex AI topics without code. It offers actionable insights for designing inclusive, experiential AI learning resources that scale across disciplines and learner backgrounds.
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.14)
- North America > United States > Arkansas (0.04)
- North America > United States > Wisconsin (0.04)
- (8 more...)
- Research Report > New Finding (1.00)
- Instructional Material (1.00)
- Research Report > Experimental Study (0.68)
- Education > Educational Setting > K-12 Education (0.95)
- Education > Educational Setting > Higher Education (0.70)
Embracing Contradiction: Theoretical Inconsistency Will Not Impede the Road of Building Responsible AI Systems
This position paper argues that the theoretical inconsistency often observed among Responsible AI (RAI) metrics, such as differing fairness definitions or tradeoffs between accuracy and privacy, should be embraced as a valuable feature rather than a flaw to be eliminated. We contend that navigating these inconsistencies, by treating metrics as divergent objectives, yields three key benefits: (1) Normative Pluralism: Maintaining a full suite of potentially contradictory metrics ensures that the diverse moral stances and stakeholder values inherent in RAI are adequately represented. (2) Epistemological Completeness: The use of multiple, sometimes conflicting, metrics allows for a more comprehensive capture of multifaceted ethical concepts, thereby preserving greater informational fidelity about these concepts than any single, simplified definition. (3) Implicit Regularization: Jointly optimizing for theoretically conflicting objectives discourages overfitting to one specific metric, steering models towards solutions with enhanced generalization and robustness under real-world complexities. In contrast, efforts to enforce theoretical consistency by simplifying or pruning metrics risk narrowing this value diversity, losing conceptual depth, and degrading model performance. We therefore advocate for a shift in RAI theory and practice: from getting trapped in inconsistency to characterizing acceptable inconsistency thresholds and elucidating the mechanisms that permit robust, approximated consistency in practice.
- Oceania > Australia (0.14)
- Europe > Austria > Vienna (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (8 more...)
- Health & Medicine (1.00)
- Law > Statutes (0.67)
- Government > Regional Government > North America Government > United States Government (0.46)
The Quest for Reliable Metrics of Responsible AI
Rampisela, Theresia Veronika, Maistro, Maria, Ruotsalo, Tuukka, Lioma, Christina
The development of Artificial Intelligence (AI), including AI in Science (AIS), should be done following the principles of responsible AI. Progress in responsible AI is often quantified through evaluation metrics, yet there has been less work on assessing the robustness and reliability of the metrics themselves. We reflect on prior work that examines the robustness of fairness metrics for recommender systems as a type of AI application and summarise their key takeaways into a set of non-exhaustive guidelines for developing reliable metrics of responsible AI. Our guidelines apply to a broad spectrum of AI applications, including AIS.
- Europe > Denmark > Capital Region > Copenhagen (0.06)
- North America > United States > New York > New York County > New York City (0.05)
Do Chatbots Walk the Talk of Responsible AI?
Aaronson, Susan Ariel, Moreno, Michael
Introduction. In April 2025, sixteen-year-old Adam Raine committed suicide. Over the course of several months, the teen confided his suicidal thoughts to OpenAI's ChatGPT chatbot. ChatGPT is not designed or developed to provide therapy, but it did not respond to Adam's prompts with suggestions that he obtain professional help. Moreover, when Adam expressed concern that his parents would blame themselves if he died, ChatGPT reportedly responded, "That doesn't mean you owe them survival," and offered to help draft his suicide note. Adam's death was not the only example of chatbot misbehavior. OpenAI claims it doesn't permit ChatGPT "to generate hateful, harassing, violent, or adult content." In July 2025, a reporter documented ChatGPT providing users with detailed instructions for self-mutilation, murder, and satanic rituals. OpenAI has also acknowledged that individuals can misuse its systems. But the company has taken some responsibility.
- North America > Canada (0.15)
- North America > United States (0.14)
- Asia > Japan > Honshū > Chūgoku > Hiroshima Prefecture > Hiroshima (0.04)
- Law (1.00)
- Government (1.00)
- Information Technology > Security & Privacy (0.68)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (0.68)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.41)
Accenture CEO Julie Sweet on Trust in AI, Building New Workbenches, and Why Humans Are Here to Stay
Javed is a senior editor at TIME, based in the London bureau. How do you see your clients adopting AI and grappling with the rapid changes it is bringing? CEOs have identified that AI is simple to try and hard to scale, and that's why they come to Accenture. And you can see that in the explosive growth of our advanced AI practice over the past couple of years.
- North America > United States > California (0.05)
- Europe > France (0.05)
Toward Environmentally Equitable AI
The growing adoption of artificial intelligence (AI) has been accelerating across all parts of society, boosting productivity and addressing pressing global challenges such as climate change. Nonetheless, the technological advancement of AI relies on computationally intensive calculations and thus has led to a surge in resource usage and energy consumption. Even putting aside the environmental toll of server manufacturing and supply chains, AI systems can create a huge environmental cost to communities and regions where they are deployed, including air/thermal pollution due to fossil fuel-based electricity generation and further stressed water resources due to AI's staggering water footprint [12,25]. To make AI more environmentally friendly and ensure that its overall impacts on climate change are positive, recent studies have pursued multifaceted approaches, including efficient training and inference [5], energy-efficient GPU and accelerator designs [19], carbon forecasting [14], carbon-aware task scheduling [1,21], green cloud infrastructures [2], sustainable AI policies [10,18], and more. Additionally, datacenter operators have also increasingly adopted carbon-free energy (such as solar and wind power) and climate-conscious cooling systems, lowering carbon footprint and direct water consumption [8].
- North America > United States > California (0.06)
- North America > United States > Arizona (0.06)
- Energy > Power Industry (0.73)
- Energy > Renewable > Wind (0.57)
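The carbon-aware task scheduling mentioned in the abstract above can be illustrated with a minimal sketch: given an hourly carbon-intensity forecast, place each deferrable job in the lowest-intensity contiguous window that still meets its deadline. The forecast values, function name, and job parameters below are made up for illustration and are not from any cited system.

```python
def schedule_job(forecast, duration, deadline):
    """Pick the contiguous start hour (job must finish by `deadline`) that
    minimizes total forecast carbon intensity over the job's duration."""
    best_start, best_cost = None, float("inf")
    for start in range(0, deadline - duration + 1):
        cost = sum(forecast[start:start + duration])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

# Hypothetical hourly forecast in gCO2/kWh, with an overnight low-carbon dip.
forecast = [450, 430, 300, 120, 110, 150, 380, 470]
start, cost = schedule_job(forecast, duration=2, deadline=8)
# start = 3, cost = 230 (the 120 + 110 window)
```

Real schedulers add constraints this sketch omits (preemption, cluster capacity, forecast uncertainty), but the core idea is the same: shift flexible compute toward the cleanest hours.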